2013
DOI: 10.1007/s00778-013-0328-8

Hybrid entity clustering using crowds and data

Cited by 13 publications (8 citation statements). References 41 publications.
“…Other research for crowdsourcing ground truth includes entity clustering and disambiguation [18], Twitter entity extraction [13], multilingual entity extraction and paraphrasing [10], and taxonomy creation [11]. However, all of these approaches rely on the assumption that one black-and-white gold standard must exist for every task.…”
Section: Crowdsourcing Ground Truth
confidence: 99%
“…Second, some problems involve tasks so large that even experts find them hard to solve, such as the protein folding problem [20]. A large set of answers from non-experts is aggregated and presented as a solution, relying on the wisdom of the crowd [21]. Our work successfully unleashed the power of crowds in addressing the license violation problem that has been solely determined through manual inspection by experts.…”
Section: Related Work
confidence: 99%
“…Crowdsourcing has grown into a viable alternative to expert ground truth collection, as crowdsourcing tends to be both cheaper and more readily available than domain experts. Experiments have been carried out in a variety of tasks and domains: medical entity extraction [22,53,60], medical relation extraction [29,53], open-domain relation extraction [32], clustering and disambiguation [34], ontology evaluation [42], web resource classification [14] and taxonomy creation [11]. [51] have shown that aggregating the answers of an increasing number of unskilled crowd workers with majority vote can lead to high quality NLP training data.…”
Section: Crowdsourcing Ground Truth
confidence: 99%
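
The last statement refers to aggregating the answers of many unskilled crowd workers with majority vote. A minimal sketch of that aggregation step is given below, assuming each item simply receives the label most workers chose; the `majority_vote` helper, the label values, and the item ids are illustrative and not taken from the cited papers.

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate crowd answers per item by simple majority vote.

    labels_per_item: dict mapping an item id to the list of labels
    collected from individual crowd workers (illustrative structure).
    """
    aggregated = {}
    for item_id, labels in labels_per_item.items():
        # most_common(1) returns the single most frequent label;
        # ties are broken by insertion order.
        aggregated[item_id] = Counter(labels).most_common(1)[0][0]
    return aggregated

# Example: three workers labelled two tweets for the entity type mentioned.
crowd_labels = {
    "tweet-1": ["PERSON", "PERSON", "ORG"],
    "tweet-2": ["ORG", "ORG", "ORG"],
}
print(majority_vote(crowd_labels))  # {'tweet-1': 'PERSON', 'tweet-2': 'ORG'}
```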