2018
DOI: 10.1109/tpami.2017.2777967

Clickstream Analysis for Crowd-Based Object Segmentation with Confidence

Abstract: With the rapidly increasing interest in machine learning based solutions for automatic image annotation, the availability of reference annotations for algorithm training is one of the major bottlenecks in the field. Crowdsourcing has evolved as a valuable option for low-cost and large-scale data annotation; however, quality control remains a major issue which needs to be addressed. To our knowledge, we are the first to analyze the annotation process to improve crowd-sourced image segmentation. Our met…

Cited by 13 publications (9 citation statements). References 53 publications.

“…-DS COCO L: All 2,818 images of cats from the COCO dataset [14] and the corresponding segmentations. We chose cats as target class because unlike other classes of the COCO dataset, the corresponding images do not suffer from poor references or ambiguities [10] and have the target object in the foreground (similar to medical instruments in endoscopic data). Note, however, that the color distribution of DS COCO L is comparable to that of the whole COCO dataset.…”
Section: Methods (mentioning, confidence: 99%)
“…[30]. [40] is one of the few studies that correlated mouse dynamics and clickstream data with annotation quality in crowdsourced image segmentation. In that study, a regression model was trained to estimate the quality of annotations from features extracted from the clickstream, i.e.…
Section: Human Annotator Behavior Analysis (mentioning, confidence: 99%)
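The study cited above trains a regression model that maps clickstream features to annotation quality. The sketch below illustrates the general idea with a generic scikit-learn regressor; the feature set, the random-forest choice, and the placeholder data are assumptions and do not reflect the exact model or features used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical clickstream features per crowd annotation (the excerpt does not list the
# exact feature set): e.g. click count, drawing time, mean cursor speed, idle time, zoom events.
X = rng.random((500, 5))   # placeholder feature matrix, one row per annotation
y = rng.random(500)        # placeholder target: overlap (e.g. DICE) with a reference mask

# Generic regressor standing in for the quality-estimation model described in the excerpt.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```

A quality estimate of this kind could be used to weight or reject individual crowd segmentations before they are merged into a consensus annotation.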
“…The model has been applied to several hundred liver tumor patients, and is currently being extended to applications in renal surgery, including intraoperative process models based on LapOntoSPM. In this context, new methods for large-scale medical data annotation based on crowdsourcing have been developed [25,40,41]. An implementation of the system is publicly available [69].…”
Section: Ontology Development in Heidelberg (mentioning, confidence: 99%)