2022 · Preprint
DOI: 10.48550/arxiv.2205.09723
Robust and Efficient Medical Imaging with Self-Supervision

Abstract: Recent progress in Medical Artificial Intelligence (AI) has delivered systems that can reach clinical expert level performance. However, such systems tend to demonstrate sub-optimal "out-of-distribution" performance when evaluated in clinical settings different from the training environment. A common mitigation strategy is to develop separate systems for each clinical setting using site-specific data [1]. However, this quickly becomes impractical as medical data is time-consuming to acquire and expensive to an…

Cited by 14 publications (19 citation statements) · References 20 publications
“…where $\sum_i e_i^2$ is the sum of squared residuals and $\sum_i (y_i - \bar{y})^2$ is the total sum of squares. $R^2$ is commonly used in clinical studies to assess how well a model explains and predicts future outcomes [19].…”
Section: Results
confidence: 99%
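For context, the statement above describes the coefficient of determination. A standard formulation, reconstructed here from the garbled extraction rather than quoted verbatim from the citing paper, is:

```latex
% Coefficient of determination: the share of outcome variance explained by the model.
R^2 = 1 - \frac{\sum_{i=1}^{n} e_i^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2},
\qquad e_i = y_i - \hat{y}_i,
\qquad \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i
```

An $R^2$ near 1 means the squared residuals are small relative to the total variance of the outcome, i.e., the model explains most of the observed variation.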
“…While some approaches have designed domain-specific pretext tasks [5, 54, 68, 69], others have adapted well-known self-supervised learning methods to medical data [25, 30, 53, 66]. Very recently, [2] applied SimCLR to a combination of the unlabeled ImageNet dataset and task-specific medical images for medical image classification; their experiments and improved performance suggest that pre-training on ImageNet is complementary to pre-training on unlabeled medical images.…”
Section: Related Work
confidence: 99%
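The SimCLR approach mentioned in this statement pre-trains an encoder by pulling two augmented views of the same image together in embedding space. Below is a minimal sketch of its NT-Xent contrastive loss, assuming PyTorch; the function name and placeholder tensors are illustrative, not code from [2].

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss over paired views: z1[i] and z2[i] embed two augmentations of the same image."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d) unit vectors
    sim = (z @ z.t()) / temperature                      # scaled cosine similarities
    # Mask self-similarity so each row's softmax ranges over the other 2n - 1 views.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive for view i is its paired augmentation at index i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Placeholder projections of two augmented views of a 32-image unlabeled batch
# (e.g., ImageNet mixed with task-specific medical images, as in the statement above).
z1 = torch.randn(32, 128, requires_grad=True)
z2 = torch.randn(32, 128, requires_grad=True)
loss = nt_xent_loss(z1, z2)
loss.backward()
```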
“…Although we have tried our best to include a diverse set of algorithms and datasets in our benchmark, it is certainly not exhaustive. There are methods to promote fairness from other perspectives, e.g., self-supervised learning may be more robust (Liu et al., 2021; Azizi et al., 2022). Also, datasets from other medical data modalities (e.g., cardiology, digital pathology) should be added.…”
Section: Relation of Domain Generalization and Fairness
confidence: 99%
“…Self-supervised models can be more robust to dataset-level distribution shift [41] and have better transfer-learning performance [42] than their supervised counterparts. The benefits of transfer learning using SSL on domain-specific data have been shown for a variety of x-ray and histology slide image tasks [43]. Finally, and possibly most compelling, SSL enables learning from much more abundant unlabeled data, addressing the data-scarcity challenge directly.…”
confidence: 99%
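The transfer-learning recipe this statement alludes to is typically: pre-train with SSL on abundant unlabeled images, then fine-tune on a small labeled target set. A hedged sketch, assuming PyTorch/torchvision and a hypothetical checkpoint path:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50()  # architecture only; weights would come from SSL pre-training
# backbone.load_state_dict(torch.load("ssl_pretrained.pt"))  # hypothetical checkpoint
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new head, e.g. a binary diagnosis task

# When labels are scarce, a common choice is a small learning rate for the whole
# network, or freezing the backbone and training only the new head.
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # placeholder batch of labeled medical images
y = torch.randint(0, 2, (8,))     # placeholder labels
optimizer.zero_grad()
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
```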