2019
DOI: 10.1007/s10278-019-00299-9
Crowdsourcing pneumothorax annotations using machine learning annotations on the NIH chest X-ray dataset

Cited by 38 publications (24 citation statements)
References 14 publications
“…These X-ray images carry text-mined labels for 14 common thorax diseases extracted from the associated radiological reports [21]. Because these labels were obtained through natural language processing, they are inherently inaccurate [22]. For example, 5302 X-ray images are labeled as pneumothorax, yet they are a mixture of images with and without pneumothorax.…”
Section: Methods (mentioning, confidence: 99%)
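This label noise is easy to see directly in the dataset's metadata. Below is a minimal sketch, assuming the public NIH ChestX-ray14 release with its Data_Entry_2017.csv metadata file and pipe-separated "Finding Labels" column (check both names against your copy); it counts the images carrying a text-mined pneumothorax label, the same population the citing authors describe as a mixture of true and false positives.

import pandas as pd

# Metadata file shipped with the NIH ChestX-ray14 release (assumed name).
labels = pd.read_csv("Data_Entry_2017.csv")

# "Finding Labels" is multi-label and pipe-separated, e.g. "Pneumothorax|Effusion".
ptx = labels[labels["Finding Labels"].str.contains("Pneumothorax")]
print(f"{len(ptx)} images carry a text-mined pneumothorax label")

# The citing authors report 5302 such images; because the labels were
# text-mined from reports, some are false positives, which is why
# image-level re-annotation (e.g. crowdsourcing) is needed.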
“…However, the large volume of chest radiographs (CXRs) in a routine clinical environment may lengthen turnaround times for radiology reporting, which can delay urgent treatment; both this issue and latent critical findings could potentially be addressed by artificial intelligence (AI)-assisted reporting or AI-based image triage. Several AI algorithms, trained on publicly available datasets, have demonstrated the potential to detect PTX in CXRs, with diagnostic accuracies quantified by areas under the receiver operating characteristic curve (AUROCs) of up to 0.937 [8][9][10][11][12][13]. In studies evaluating these algorithms, performance was assessed on data derived from public datasets [8,14,15].…”
Section: Introduction (mentioning, confidence: 99%)
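For readers unfamiliar with the metric, AUROC summarizes a classifier's ranking quality across all decision thresholds. A minimal sketch of how such a figure is computed, using scikit-learn and illustrative placeholder values rather than data from any cited study:

from sklearn.metrics import roc_auc_score

# 1 = pneumothorax present on the CXR (illustrative ground truth).
y_true = [0, 0, 1, 1, 0, 1]
# Predicted probabilities from a hypothetical detection model.
y_score = [0.10, 0.40, 0.80, 0.90, 0.20, 0.70]

# AUROC = probability that a random positive is ranked above a random negative.
print(f"AUROC = {roc_auc_score(y_true, y_score):.3f}")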
“…Various studies have shown the utility of crowdsourcing and citizen science for biological and medical image annotation [38][39][40][41]. Crowdsourcing for annotation and evaluation is advantageous because it is scalable, high-throughput, cost-efficient, and accurate [42][43][44].…”
Section: Discussion (mentioning, confidence: 99%)
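A common way such crowdsourced annotations are turned into a single label per image is aggregation across annotators. A minimal sketch of per-image majority voting follows; the image IDs and votes are hypothetical, and production pipelines often weight votes by estimated annotator reliability (e.g. Dawid-Skene) rather than counting them equally.

from collections import Counter

# image id -> independent annotator votes (1 = pneumothorax, 0 = none).
votes = {
    "img_001": [1, 1, 0],
    "img_002": [0, 0, 0],
    "img_003": [1, 0, 1],
}

# Majority vote: keep the most common label for each image.
consensus = {img: Counter(v).most_common(1)[0][0] for img, v in votes.items()}
print(consensus)  # {'img_001': 1, 'img_002': 0, 'img_003': 1}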