2020 (Preprint)
DOI: 10.48550/arxiv.2006.10511

Contrastive learning of global and local features for medical image segmentation with limited annotations

Abstract: A key requirement for the success of supervised deep learning is a large labeled dataset - a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for ex…

Cited by 36 publications (74 citation statements)
References 53 publications (112 reference statements)
“…There are various efforts studying contrastive methods to learn meaningful representations. For visual information, contrastive frameworks are applied to tasks such as image classification [29,30], object detection [31-33], and image segmentation [34-36]. Applications beyond images include adversarial training [37-39], graphs [40-43], and sequence modeling [44-46].…”
Section: Contrastive Framework and Sampling Techniques
confidence: 99%
“…The full proof of (4) can be found in the Appendix. Previous works [39,35,40,29,20,50] treat positive and negative samples as equally likely by setting β = 1. In this paper, we leverage different values of β to guide the model to concentrate on samples that are distinct from the input.…”
Section: Contrastive Objective Derivation
confidence: 99%
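The β-weighted objective in the statement above can be sketched as a small variant of an InfoNCE-style contrastive loss. This is a minimal illustration, not the cited paper's implementation: the function name `info_nce_beta` and the exact placement of β on the negative term are assumptions; the statement only says that β = 1 recovers the standard equal weighting of positives and negatives.

```python
import numpy as np

def info_nce_beta(anchor, positive, negatives, tau=0.1, beta=1.0):
    """InfoNCE-style contrastive loss with a hypothetical weight `beta`
    on the negative terms; beta = 1 corresponds to the standard objective
    where positive and negative samples are weighted equally."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(anchor, positive) / tau)          # similarity to the positive view
    neg_sum = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    # Larger beta makes dissimilar (negative) samples dominate the
    # denominator, pushing the model to separate them more aggressively.
    return float(-np.log(pos / (pos + beta * neg_sum)))
```

With β = 0 the negatives drop out entirely and the loss is zero whenever a positive exists; increasing β monotonically increases the loss, which matches the intuition of concentrating on samples distinct from the input.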
“…In the medical domain, popular techniques to leverage unlabelled samples include transfer learning from a distant or related task, and self-training with automatically generated labels [7]. Other techniques to leverage unlabelled samples include contrastive learning [8]-[10] and self-supervised representation learning [11]. These techniques either pre-train without labels, or directly use model predictions as true labels.…”
Section: Introduction
confidence: 99%
“…Nowadays, contrastive representation learning has been widely applied and remarkably successful in medical image analysis [51,33,6]. The goal of contrastive learning is to learn invariant representations by contrasting medical image pairs, which can be regarded as an implicit way to preserve maximal information.…”
Section: Introduction
confidence: 99%
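The pair construction that the statement above alludes to can be sketched in a few lines. This is a toy illustration under stated assumptions: the augmentation is additive noise only (a real medical-imaging pipeline would use crops, flips, and intensity shifts), and the 8x8 arrays are stand-ins for unlabeled scans, not actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng):
    # Toy view generation: additive Gaussian noise. Two augmentations of
    # the same scan form a positive pair; views of different scans form
    # a negative pair.
    return img + rng.normal(0.0, 0.05, img.shape)

def cosine(x, y):
    x, y = x.ravel(), y.ravel()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

scan_a = rng.normal(size=(8, 8))  # stand-ins for two unlabeled scans
scan_b = rng.normal(size=(8, 8))

# A contrastive objective pulls the positive pair together and pushes
# the negative pair apart in representation space.
pos_sim = cosine(augment(scan_a, rng), augment(scan_a, rng))
neg_sim = cosine(augment(scan_a, rng), augment(scan_b, rng))
```

Even in raw pixel space the positive pair is far more similar than the negative one; contrastive pre-training aims to preserve this relationship in the learned (invariant) representation.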