2014
DOI: 10.1007/978-3-319-10470-6_44
Crowdsourcing for Reference Correspondence Generation in Endoscopic Images

Cited by 34 publications (32 citation statements). References 8 publications.
“…It has successfully been used in several medical imaging applications, and a review of medical applications using crowdsourcing is given in [19]. Examples of medical imaging tasks solved by crowdsourcing include disease detection based on optical microscopy images [17], shape-based classification of polyps in computed tomography data [18], medical image classification [3,11], reference data [8,15,16] and training data generation [14], as well as the assessment of surgical skills [4]. But it has, to our knowledge, not yet been applied in the context of neuroimaging data annotation.…”
Section: Introduction (mentioning)
confidence: 99%
“…Just like the KWs, the experts did not receive any training to complete the task. For a fair comparison, we ordered the images of Data Set I according to the median annotation performance obtained in an initial experiment [13] and then picked 10 images including the first and the last one (i.e., about every 11th image). This set of 100 features, each associated with 10 crowd annotations and five expert annotations, will be referred to as Data Set II.…”
Section: Data Acquisition (mentioning)
confidence: 99%
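The image-selection step described in the excerpt above (10 images drawn from 100 images ordered by median annotation performance, including the first and the last, i.e. about every 11th image) can be illustrated with a minimal sketch. This is not code from the cited paper; the function name, the synthetic image IDs, and the use of NumPy are assumptions made purely for illustration.

```python
# Hypothetical sketch of the selection step: from 100 images ordered by
# median annotation performance, pick 10 evenly spaced images that
# always include the first and the last one.
import numpy as np

def pick_evenly_spaced(ordered_images, n_picks=10):
    """Return n_picks items spread evenly over the ordered list,
    including both the first and the last element."""
    idx = np.linspace(0, len(ordered_images) - 1, n_picks).round().astype(int)
    return [ordered_images[i] for i in idx]

# Example with 100 synthetic image IDs sorted by annotation performance.
ordered_ids = [f"img_{i:03d}" for i in range(100)]
subset = pick_evenly_spaced(ordered_ids, n_picks=10)
print(subset)  # indices 0, 11, 22, ..., 99 -> roughly every 11th image
```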
“…For the data set applied in [13], the mean time required for obtaining 100 HITs (one per image) from MTurk was 77 ± 16 min, averaged over 10 requests (i.e., uploads of HITs). Hence, 10,000 annotations could be generated in about 12 h. The time from data upload until completion of 10,000 annotation tasks (10,000 for Data Set I and 2 × 10,000 for Data Set II) in November 2014 ranged from 8 h 39 min to 13 h 10 min.…”
Section: Data Acquisition (mentioning)
confidence: 99%
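The "about 12 h" figure in the excerpt above can be read as a rough sequential-throughput estimate; this is our interpretation, not a calculation stated in the excerpt. Assuming the 10 uploads of 100 HITs each are processed one after another:

```latex
% Rough plausibility check (interpretation, not taken from the source):
% 10 sequential uploads, each completing in roughly 77 +/- 16 minutes.
10 \times (77 \pm 16)\,\text{min} \approx 770\,\text{min} \approx 12.8\,\text{h}
```

This is consistent with the reported "about 12 h" and falls inside the observed completion range of 8 h 39 min to 13 h 10 min.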