2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
DOI: 10.1109/isbi.2019.8759240
Segmenting The Kidney On CT Scans Via Crowdsourcing

Cited by 7 publications (6 citation statements)
References 9 publications
“…The STAPLE algorithm compares segmentations and computes a probabilistic estimate of the true segmentation. For narrowly focused applications such as colonic polyp classification and kidney segmentation, crowdsourcing of labels by nonexperts may be feasible (45,46). Heim et al (47) compared segmentations of the liver performed by nonexperts, engineers with domain knowledge, medical students, and radiologists.…”
Section: Ground Truth or Label Quality
confidence: 99%
“…In the literature, experiences with crowdsourcing medical imaging tasks to untrained members of the community at large have been described with variable success. Such tasks include annotation of airways, segmentation of lung nodules, kidneys, and livers, and classification of colon polyps on CT colonography images [18][19][20][21].…”
Section: Discussion
confidence: 99%
“…Distributing the labeling task among more human labelers reduces the burden on each individual but increases the overall labeling work and raises consistency issues that may require averaged or consensus labels across several labelers. Recent experiments have found value in crowdsourced segmentation labels produced by nonexpert reviewers (10,11). For tasks with abundant imaging data, low-quality labels may be sufficient to train a network.…”
Section: Teaching Points
confidence: 99%