Biocomputing 2015 (2014)
DOI: 10.1142/9789814644730_0028

Microtask Crowdsourcing for Disease Mention Annotation in PubMed Abstracts

Abstract: Identifying concepts and relationships in biomedical text enables knowledge to be applied in computational analyses. Many biomedical natural language processing (BioNLP) projects attempt to address this challenge, but the state of the art in BioNLP still leaves much room for improvement. Progress in BioNLP research depends on large, annotated corpora for evaluating information extraction systems and training machine learning models. Traditionally, such corpora are created by small numbers of expert annotators often…

Cited by 38 publications (43 citation statements) · References 11 publications

“…Researchers’ projects have used AMT to complete a variety of tasks [40,41]. Recent research has shown that AMT and other crowdsourcing platforms can be used to generate corpora for clinical natural language processing and disease mention annotation [41,42]. AMT was used to detect errors in a medical ontology, and it was found that the crowd was as effective as the domain experts [43].…”
Section: Methods
confidence: 99%
“…AMT was used to detect errors in a medical ontology, and it was found that the crowd was as effective as the domain experts [43]. In addition, AMT workers were engaged in identifying disease mentions in PubMed abstracts [42] and ranking adverse drug reactions in order of severity [44], with good results.…”
Section: Methods
confidence: 99%
“…The crowd has interpreted and annotated medical images and documents to contribute to databases for future research (6–8). The crowd has also assessed the skills of surgeons and played games to map retinal ganglion cell neurons in mice (9–13).…”
Section: Crowdsourcing As Tool For Biomedical Research
confidence: 99%
“…Azzam and Harman showed Turkers were able to consistently rate the most important point in a transcript and identify supporting text segments from the transcript to explain their ratings (15). In a similar study, Turkers highlighted words and phrases to indicate disease processes in PubMed abstracts after passing a qualification test and completing annotation trainings (6). In a span of 9 days and for a cost of under $640, Turkers annotated 589 abstracts 15 times and produced a disease mention annotation corpus similar to the gold standard (NCBI Disease corpus) (6).…”
Section: Crowdsourcing As Tool For Biomedical Research
confidence: 99%
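The workflow described in the statement above (redundant annotation by 15 workers per abstract, then aggregation into a consensus corpus) is easy to sketch. Below is a minimal, illustrative majority vote over crowd-highlighted spans at the token level; the function name, data representation, and 50% agreement threshold are assumptions made for illustration, not the authors' exact aggregation procedure.

```python
from collections import Counter

def aggregate_token_votes(tokens, worker_spans, threshold=0.5):
    """Merge redundant crowd annotations of one abstract by majority vote.

    tokens:       list of token strings for the abstract.
    worker_spans: one set of highlighted token indices per worker.
    threshold:    fraction of workers that must highlight a token
                  for it to be kept in the consensus annotation.
    Returns the sorted indices of tokens kept as disease-mention text.
    """
    n_workers = len(worker_spans)
    votes = Counter()
    for span in worker_spans:
        votes.update(span)
    return sorted(i for i, v in votes.items() if v >= threshold * n_workers)

# Toy example: 3 of 4 workers highlight "lung cancer" (tokens 4-5),
# one worker highlights only "cancer". Both tokens pass the 50% cutoff.
tokens = "Patients were diagnosed with lung cancer .".split()
worker_spans = [{4, 5}, {4, 5}, {4, 5}, {5}]
print(aggregate_token_votes(tokens, worker_spans))  # -> [4, 5]
```

A stricter threshold trades recall for precision, which is the central tuning decision in this kind of redundant-annotation aggregation.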