2016
DOI: 10.1167/tvst.5.5.6
The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images

Abstract: Purpose: Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool, comparing it to expert classification. Methods: We used 100 retinal fundus photograph images with predetermined disease criteria selected by two experts from a large cohort study. T…
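As an illustrative aside (not taken from the paper itself), agreement between a crowd-derived grade and an expert reference grade of the kind described in the abstract is commonly summarized with percent agreement or Cohen's kappa. A minimal sketch, using made-up labels:

```python
# Illustrative sketch (not from the paper): comparing crowd-derived grades
# against expert reference grades with percent agreement and Cohen's kappa.
# The label lists below are hypothetical, for demonstration only.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

expert = ["normal", "disease", "disease", "normal", "disease", "normal"]
crowd  = ["normal", "disease", "normal",  "normal", "disease", "normal"]

agreement = sum(e == c for e, c in zip(expert, crowd)) / len(expert)
print(f"percent agreement: {agreement:.2f}")
print(f"Cohen's kappa:     {cohens_kappa(expert, crowd):.2f}")
```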

Cited by 35 publications (36 citation statements)
References 21 publications
“…Other studies focus on the creation of training sets for convolutional neural networks for finding nuclei or mitoses in cancer [8], [9]. Crowdsourcing has also been applied in labeling of retinal images [10], text annotation in radiology reports [11], or for delineation of a single object per image [12].…”
Section: Introduction (mentioning)
confidence: 99%
“…When tasks are less intrinsically interesting to volunteers, minimally-trained workers can complete tasks for small payments through crowdsourcing platforms such as Amazon's Mechanical Turk (MTurk), and the consensus annotations (across multiple workers or "turkers") can be highly comparable with expert annotations, and sufficiently reliable for use as training data for detection algorithms. (27)(28)(29) Therefore, we hypothesized that consensus from crowdsourced annotations can be used as a substitute for ground truth to tune and benchmark spot-calling algorithms. However, there are no published in situ transcriptomics pipelines that can incorporate ground truth from crowdsourced annotations.…”
Section: Introduction (mentioning)
confidence: 99%
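The statement above describes taking the consensus of multiple minimally trained workers as a substitute for expert ground truth. A minimal sketch of per-image majority voting under that idea (the image IDs, worker IDs, and labels are hypothetical):

```python
# Illustrative sketch (not from the cited work): majority-vote consensus
# across multiple crowd workers' labels for each image.
from collections import Counter, defaultdict

# Hypothetical (image_id, worker_id, label) annotations.
annotations = [
    ("img01", "w1", "disease"), ("img01", "w2", "disease"), ("img01", "w3", "normal"),
    ("img02", "w1", "normal"),  ("img02", "w2", "normal"),  ("img02", "w3", "normal"),
]

labels_by_image = defaultdict(list)
for image_id, _worker, label in annotations:
    labels_by_image[image_id].append(label)

consensus = {}
for image_id, labels in labels_by_image.items():
    top_label, votes = Counter(labels).most_common(1)[0]
    consensus[image_id] = (top_label, votes / len(labels))  # label plus vote fraction

for image_id, (label, fraction) in sorted(consensus.items()):
    print(f"{image_id}: {label} (agreement {fraction:.2f})")
```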
“…Categories may be either nominal [11, 18, 19], existing in name only, or ordinal, referring to a position in an ordered series or on a gradient [15, 16, 19]. Classification is often labor-intensive and prone to human bias, which can increase with task complexity and time requirement [20, 21]. Alternative scoring approaches have relied on morphometrics and machine learning to automate classification; for example, sorting fruit into shape categories in both tomato [11] and strawberry [18].…”
Section: Introduction (mentioning)
confidence: 99%