2021
DOI: 10.48550/arxiv.2112.09645
Preprint

Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation

Abstract: Supervised deep learning-based methods yield accurate results for medical image segmentation. However, they require large labeled datasets for this, and obtaining them is a laborious task that requires clinical expertise. Semi/self-supervised learning-based approaches address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global level representations from unlabeled images and achieve high performance in c…

Cited by 2 publications (3 citation statements)
References 78 publications (134 reference statements)

“…Our work maximizes the MI between similar classes using labeled source images and pseudo-labeled target images. Concurrent to our work, Chaitanya et al [3] proposed an end-to-end semi-segmentation framework by defining a local pixel-level contrastive loss between pseudo-labels of unlabeled sets and limited labeled sets. They randomly sample pixels from each image to address the computational limitations of running CL for all the pixels.…”
Section: Contrastive Learning and Mutual Information (mentioning)
confidence: 97%
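
The statement above describes a local pixel-level contrastive loss computed over a small random subset of pixels per image, with pseudo-labels standing in for ground truth on the unlabeled set. Below is a minimal PyTorch sketch of that idea; the tensor shapes, the default of 4 sampled pixels, and the temperature are illustrative assumptions rather than the exact settings of [3].

```python
import torch
import torch.nn.functional as F

def local_pixel_contrastive_loss(feats, labels, num_pixels=4, temperature=0.1):
    """Pixel-level supervised contrastive loss over a random subset of pixels.

    feats:  (B, C, H, W) decoder feature maps
    labels: (B, H, W)    label or pseudo-label maps (integer class ids)
    """
    B, C, H, W = feats.shape
    feats = F.normalize(feats, dim=1)                      # unit-norm pixel embeddings

    emb, lab = [], []
    for b in range(B):
        # sample a few pixels per image so the pairwise loss stays tractable
        idx = torch.randperm(H * W, device=feats.device)[:num_pixels]
        emb.append(feats[b].reshape(C, -1)[:, idx].t())    # (num_pixels, C)
        lab.append(labels[b].reshape(-1)[idx])             # (num_pixels,)
    emb, lab = torch.cat(emb), torch.cat(lab)

    sim = emb @ emb.t() / temperature                      # pairwise similarities
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, -1e9)                 # exclude self-pairs

    # positives: sampled pixels that carry the same (pseudo-)label
    pos_mask = (lab.unsqueeze(0) == lab.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    per_pixel = -(log_prob * pos_mask).sum(dim=1) / pos_counts.clamp(min=1)
    return per_pixel[pos_counts > 0].mean()                # skip pixels with no positive
```

In a semi-supervised setting, `labels` would hold ground-truth masks for the annotated images and network-predicted pseudo-labels for the unannotated ones.
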
“…The impact of MI loss dilutes with reduction in weight, leading to a drop in performance. Further, we study how replacing the average pooling approach with the max-pooling approach or sampling random pixels approach (N=4), as done in [3], impacts performance. Average pooling performs superior to other approaches.…”
Section: Ablation Studies (mentioning)
confidence: 99%
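
The ablation above compares how the pixel embeddings of a class are aggregated before the MI/contrastive objective is applied: average pooling, max pooling, or sampling a few random pixels as done in [3]. A minimal sketch of the three aggregation choices, assuming a single feature map and label map as inputs (the function name, mode flags, and defaults are hypothetical):

```python
import torch
import torch.nn.functional as F

def class_embedding(feats, labels, cls, mode="avg", num_pixels=4):
    """Aggregate the embeddings of all pixels belonging to class `cls`.

    feats:  (C, H, W) feature map of one image
    labels: (H, W)    label or pseudo-label map
    Returns a (C,) vector for "avg"/"max", an (N, C) matrix for "sample",
    or None if the class is absent from this image.
    """
    C = feats.shape[0]
    mask = (labels == cls).reshape(-1)                 # (H*W,) boolean mask
    pix = feats.reshape(C, -1)[:, mask].t()            # (P, C) pixels of this class
    if pix.numel() == 0:
        return None

    if mode == "avg":                                  # average pooling (best in the ablation)
        return F.normalize(pix.mean(dim=0), dim=0)
    if mode == "max":                                  # channel-wise max pooling
        return F.normalize(pix.max(dim=0).values, dim=0)
    if mode == "sample":                               # N random pixels, as in [3]
        idx = torch.randperm(pix.shape[0], device=pix.device)[:num_pixels]
        return F.normalize(pix[idx], dim=1)
    raise ValueError(f"unknown mode: {mode}")
```
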
“…The past five years have seen tremendous progress related to CL in medical image segmentation [15,16,42,79,81,84,85,87,88,91], and it becomes increasingly important to improve representation in label-scarcity scenarios. The key idea in CL [21,39,41,59] is to learn representations from unlabeled data that obey similarity constraints by pulling augmented views of the same samples closer in a representation space, and pushing apart augmented views of different samples.…”
Section: Contrastive Learning (mentioning)
confidence: 99%
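
The key idea summarized above (pulling augmented views of the same sample together and pushing apart views of different samples) is commonly instantiated as an NT-Xent/InfoNCE objective over paired embeddings. A minimal sketch, assuming two (B, D) batches of embeddings produced from two augmentations of the same images; the temperature value is illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (InfoNCE) loss over two augmented views of the same batch.

    z1, z2: (B, D) embeddings; row i of z1 and row i of z2 come from the
            same underlying image and form the positive pair.
    """
    B = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2B, D) unit-norm
    sim = z @ z.t() / temperature                           # (2B, 2B) similarities
    self_mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                  # never match a view to itself

    # the positive for row i is its counterpart in the other view
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)]).to(z.device)
    return F.cross_entropy(sim, targets)
```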