2022
DOI: 10.1148/radiol.212482
Simplified Transfer Learning for Chest Radiography Models Using Less Data

Cited by 24 publications (7 citation statements); references 14 publications.
“…Although more research is necessary, self-supervised pretraining is a promising approach to image interpretation tasks, especially for tasks where datasets are small but contain clear, specific labels. A recent study found that pretraining on natural images, followed by pretraining on large weakly labeled CXR datasets, and finally task-specific training on small labeled datasets could significantly reduce label requirements on the target dataset 42; thus the composition of large-scale out-of-domain pretraining and self-supervised pretraining may be a fruitful direction for future work.…”
Section: Discussion (mentioning)
confidence: 99%
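The staged recipe summarized in the excerpt above (natural-image pretraining, then pretraining on a large weakly labeled CXR corpus, then task-specific training on a small labeled set) can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the cited study's implementation: the DataLoaders weak_cxr_loader and target_loader, the 14-label output head, and all hyperparameters are hypothetical.

import torch
import torch.nn as nn
from torchvision import models

def build_backbone(num_labels: int) -> nn.Module:
    # Stage 1: start from weights pretrained on natural images (ImageNet).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_labels)
    return model

def finetune(model: nn.Module, loader, epochs: int, lr: float, device: str = "cpu") -> nn.Module:
    # Generic supervised fine-tuning loop, reused for stages 2 and 3.
    criterion = nn.BCEWithLogitsLoss()                # multi-label CXR findings
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
    return model

# Stage 2 (hypothetical loader): intermediate fine-tuning on a large,
# weakly labeled CXR corpus, e.g. labels mined automatically from reports.
# model = finetune(build_backbone(num_labels=14), weak_cxr_loader, epochs=5, lr=1e-4)

# Stage 3 (hypothetical loader): task-specific fine-tuning on the small,
# carefully labeled target dataset, typically with a lower learning rate.
# model = finetune(model, target_loader, epochs=10, lr=1e-5)

A common design choice, assumed here, is to reuse the same loop for the later stages with a smaller learning rate so the small target set does not wash out what was learned on the larger corpora.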
“…The models used in this study were pretrained using contrastive self-supervised learning (CSL), which has been shown to perform better than ImageNet pretraining for CXR classification 27. Specifically, an approach called MoCo-CXR 28 was used; it is an adaptation of the momentum contrast (MoCo) 29 approach for use with CXR data.…”
Section: Methods (mentioning)
confidence: 99%
“…The models used in this study were pretrained using contrastive self-supervised learning (CSL), which has been shown to perform better than ImageNet pretraining for CXR classification. 27 Specifically, an approach called MoCo-CXR 28 was used; it is an adaptation of the momentum contrast (MoCo) 29 approach for use with CXR data. MoCo maximizes the agreement between positive pairs of images (an image and augmentations thereof) while minimizing the agreement between negative pairs (any other pair of images).…”
Section: Model Development (mentioning)
confidence: 99%
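The MoCo behavior described above (maximize agreement with the positive key, minimize agreement with queued negatives) reduces to an InfoNCE-style cross-entropy over similarity logits. The sketch below is a stand-alone illustration that assumes pre-computed query and key embeddings and a temperature of 0.07 chosen for the example; it is not the MoCo-CXR code and omits the momentum encoder and queue maintenance.

import torch
import torch.nn.functional as F

def moco_infonce_loss(q: torch.Tensor, k_pos: torch.Tensor, negatives: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    # q: (N, D) query embeddings; k_pos: (N, D) positive keys (augmented views of
    # the same images); negatives: (K, D) queue of keys from other images.
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    negatives = F.normalize(negatives, dim=1)

    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(1)   # (N, 1) positive logits
    l_neg = torch.einsum("nd,kd->nk", q, negatives)           # (N, K) negative logits

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long)    # the positive is class 0
    # Cross-entropy pushes each query toward its own key and away from the negatives.
    return F.cross_entropy(logits, labels)

# Example with random embeddings: batch of 8 queries, 128-dim features, 4096 negatives.
loss = moco_infonce_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(4096, 128))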
“…In this sense, Foundation Model training is an approach that trades off the need for task-specific data against the need for large amounts of data at pretraining. This is leveraged in hierarchical self-supervised pretraining, which consists of a sequence of self-supervised training steps on decreasing amounts of increasingly task-relevant data, so as to tune the trade-off between data quantity and quality in ways that best match the data availability [10], [11].…”
Section: Data Requirements (mentioning)
confidence: 99%
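Purely as an illustration of the hierarchy described above (the corpus names and sizes below are invented, not taken from [10] or [11]), such a schedule can be written as an ordered list of stages with decreasing data quantity and increasing task relevance:

from dataclasses import dataclass

@dataclass
class PretrainStage:
    corpus: str          # data source used at this stage (names are hypothetical)
    num_images: int      # data quantity decreases stage by stage
    relevance: str       # task relevance increases stage by stage

SCHEDULE = [
    PretrainStage("natural_images", 1_000_000, "out-of-domain"),
    PretrainStage("weakly_labeled_cxr", 200_000, "in-domain, weak labels"),
    PretrainStage("unlabeled_target_cxr", 10_000, "target distribution"),
]

# A driver would run a self-supervised objective (e.g. the contrastive loss sketched
# earlier) over the stages in order, initializing each stage from the previous weights.
for stage in SCHEDULE:
    print(f"pretrain on {stage.corpus}: {stage.num_images} images ({stage.relevance})")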