2014
DOI: 10.1007/978-3-319-10470-6_55

Can Masses of Non-Experts Train Highly Accurate Image Classifiers?

Abstract: Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic ima…

Cited by 67 publications (44 citation statements); references 10 publications.
“…Another approach to detect low-quality labels is to acquire multiple labels from each sample and rank them against the majority of the acquired labels [2], [12], [13], [14], [15]. A pixel of the image to be annotated is classified as belonging to the object if the majority of workers have classified it as object (e.g.…”
Section: Majority Voting
confidence: 99%
“…A pixel of the image to be annotated is classified as belonging to the object if the majority of workers have classified it as object (e.g. [14]). The assumption behind the majority voting approach is that the majority of labels are of good quality, which is likely to result in a larger amount of labelling data being acquired from the crowd.…”
Section: Majority Voting
confidence: 99%
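The majority-voting rule described in the statements above can be made concrete with a minimal sketch. The code below is illustrative only and not taken from the cited paper: it assumes each crowd worker submits a binary mask of the same shape, and a pixel is kept as "object" when more than half of the workers labelled it so.

```python
# Minimal sketch of pixel-wise majority voting over crowd annotations (illustrative).
import numpy as np

def majority_vote(masks: np.ndarray) -> np.ndarray:
    """Fuse worker masks of shape (n_workers, H, W) into one binary mask."""
    votes = masks.sum(axis=0)                        # "object" votes per pixel
    return (votes > masks.shape[0] / 2).astype(np.uint8)

# Example: three workers annotating a 2x2 image
worker_masks = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 1], [1, 1]],
])
print(majority_vote(worker_masks))   # -> [[1 0] [1 1]]
```

The strict ">" threshold means ties (possible with an even number of workers) default to background; a different tie-breaking rule or a per-worker reliability weight could be substituted without changing the overall scheme.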
“…Exemplary tasks related to the last category include disease detection based on microscopic image analysis [8], shape-based classification of polyps in computed tomography data [9], fiber segmentation, medical image classification [10], and surgical skill assessment [11]. The goal of our work was to investigate whether crowdsourcing can be (part of) the solution to the large-scale data annotation problem in computer-assisted interventions. Two pilot studies in the context of instrument segmentation [12] and correspondence search [13], both presented at MICCAI 2014, showed that the quality of crowd annotations can compete with that of experts, but the crowd is orders of magnitude faster and also less expensive. This MICCAI special issue article is an extension of this work.…”
Section: Introduction
confidence: 98%
“…This method does not mark ROIs on the image. There is also some work in crowd-sourcing of medical image annotations [8,4]. The automatic approach in the current paper can be used as a complement to crowd-sourcing where images can be annotated with some preliminary contours before being edited by human annotators.…”
Section: Introduction
confidence: 99%