2022
DOI: 10.1101/2022.11.19.22282519
Preprint

Self-Supervised Pretraining Enables High-Performance Chest X-Ray Interpretation Across Clinical Distributions

Abstract: Chest X-rays (CXRs) are a rich source of information for physicians, essential for disease diagnosis and treatment selection. Recent deep learning models aim to alleviate strain on medical resources and improve patient care by automating the detection of diseases from CXRs. However, shortages of labeled CXRs can pose a serious challenge when training models. Currently, models are generally pretrained on ImageNet, but they often then need to be finetuned on hundreds of thousands of labeled CXRs to achieve high …
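
As a point of reference for the conventional pipeline the abstract describes, here is a minimal sketch (not the authors' code) of finetuning an ImageNet-pretrained backbone on labeled CXRs. The backbone choice (DenseNet-121) and the 14-label multi-label head are illustrative assumptions, not details taken from the preprint.

```python
# Hedged sketch: ImageNet-pretrained backbone finetuned on labeled chest X-rays.
# Backbone and label count are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 14  # assumption: CheXpert-style multi-label setup

# Start from ImageNet-pretrained weights, as the abstract notes is standard.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Linear(backbone.classifier.in_features, NUM_LABELS)

criterion = nn.BCEWithLogitsLoss()  # multi-label disease classification
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised finetuning step on a batch of labeled CXRs."""
    optimizer.zero_grad()
    logits = backbone(images)            # shape: (batch, NUM_LABELS)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the abstract is that this supervised finetuning stage typically needs very large labeled datasets; the preprint's self-supervised pretraining is aimed at reducing that requirement.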

Cited by 1 publication (1 citation statement) | References 41 publications (38 reference statements)
“…Given the broad range of data used to train these models, the performance of foundation models is often more robust than that of conventional convolutional neural networks 14,15 . In biomedical applications, foundation models have been developed to organize biological [16][17][18] and medical 19 datasets, including modality-specific models for chest X-rays, retinal imaging, wearable waveforms and pathology images [20][21][22][23][24][25] . Training of foundation models on medical imaging has been bottlenecked by dataset size and is often limited to publicly available data that may not represent the range of disease severities and possible presentations.…”
Section: Article
Mentioning confidence: 99%