2015
DOI: 10.1093/database/bav016

Scaling drug indication curation through crowdsourcing

Abstract: Motivated by the high cost of human curation of biological databases, there is an increasing interest in using computational approaches to assist human curators and accelerate the manual curation process. Towards the goal of cataloging drug indications from FDA drug labels, we recently developed LabeledIn, a human-curated drug indication resource for 250 clinical drugs. Its development required over 40 h of human effort across 20 weeks, despite using well-defined annotation guidelines. In this study, we aim to…

Cited by 36 publications (38 citation statements). References 40 publications.

“…18, 19, 23, 25, 26, 29-34, 37, 38, 40, 41, 45-49, 51 Additional strengths include the time-saving component of using MTurk, reliability, and high quality: accurate, 34 effective, 29, 30, 51 and performance comparable to the quality of medical experts. 18, 26, 33, 34, 39, 41, 43, 48 One study compared the assessment of surgeons' technical performance of renal artery and vein dissection during robotic partial nephrectomy done by crowdsourced workers and expert surgeon graders; crowdsourced ratings on MTurk were highly correlated with surgical content experts' assessments.…”
Section: Results
Citation type: mentioning
confidence: 99%

“…Microtasks consist of relatively trivial tasks that require a large number of participants; for example, extracting features from images of cells. 15 Crowdsourcing microtask projects in biomedical research have been established to improve automated mining of biomedical text for annotating diseases, 16 curation of gene-mutation relations, 17 identification of relationships between drugs and side-effects 18 and between drugs and their indications, 19 as well as annotation of microRNA functions. 20 These efforts produce large collections of high-quality datasets that can be further utilized by algorithms that extract new knowledge from already-published data requiring better annotation, cleaning and reprocessing.…”
Citation type: mentioning
confidence: 99%

“…In biomedical research, this strategy can be divided into two principal types: microtasks and megatasks [19]. Microtasks are useful for accomplishing many simple tasks that together produce a quality resource, for example, genome annotation [20, 21], drug indication curation [22], extraction of gene expression signatures [23], and human gene-disease annotation [24], as well as many other examples in recent years [25]. Megatasks address more challenging problems and are set up as competitions between teams or individual experts, for example, the reconstruction of the topology of biological networks, or the imputation of missing data through the development of novel algorithms [26].…”
Section: Educational and Other Efforts To Involve The Community
Citation type: mentioning
confidence: 99%