2021
DOI: 10.48550/arxiv.2112.05760
Preprint

Learning Representations with Contrastive Self-Supervised Learning for Histopathology Applications

Abstract: Unsupervised learning has made substantial progress over the last few years, especially by means of contrastive self-supervised learning. The dominating dataset for benchmarking self-supervised learning has been ImageNet, for which recent methods are approaching the performance achieved by fully supervised training. The ImageNet dataset is however largely object-centric, and it is not clear yet what potential those methods have on widely different datasets and tasks that are not object-centric, such as in digital pathology. …

Cited by 6 publications (14 citation statements) | References 23 publications
“…Finally, they showed that combining image patches from different sites did not improve downstream performance and was comparable to single-site performance. The same conclusions were drawn earlier by Stacke et al. [23], who applied SimCLR to patch-wise breast cancer and skin cancer classification. They also showed that the set of optimal transformations changed depending on the dataset utilized.…”
Section: Related Work (supporting)
confidence: 85%
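The statement above notes that the optimal set of SimCLR transformations depends on the dataset. As an illustration only, the following sketch shows how such an augmentation pipeline could be configured for histopathology patches; it assumes PyTorch/torchvision, and the specific transforms and parameters are hypothetical, not those reported by Stacke et al.

# Illustrative SimCLR-style augmentation pipeline for tissue patches.
import torchvision.transforms as T

def simclr_augmentations(patch_size=224):
    # Applying this pipeline twice to the same patch yields two "views";
    # the contrastive loss pulls the two views together and pushes
    # views of other patches apart.
    return T.Compose([
        T.RandomResizedCrop(patch_size, scale=(0.5, 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomVerticalFlip(),              # tissue has no canonical orientation
        T.ColorJitter(0.4, 0.4, 0.4, 0.1),   # rough proxy for stain variability
        T.RandomGrayscale(p=0.2),
        T.GaussianBlur(kernel_size=23),
        T.ToTensor(),
    ])

augment = simclr_augmentations()
# view1, view2 = augment(patch_pil), augment(patch_pil)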
“…These works define pretext tasks from which patch-wise feature representations are learned. Such pretext tasks include contrastive predictive coding [21], contrastive learning on adjacent image patches [22], contrastive learning using SimCLR [23, 24, 25], and SimSiam [26] with an additional stop-gradient for adjacent patches [27]. Many methods utilize generic features derived from ImageNet as their patch-wise feature representations [17, 18, 28].…”
Section: Introduction (mentioning)
confidence: 99%
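Among the pretext tasks listed above, the SimSiam variant with a stop-gradient, applied here to adjacent patches, can be summarized in a few lines. This is a minimal sketch assuming PyTorch; the encoder and predictor modules are hypothetical placeholders, not the cited implementation.

# Illustrative SimSiam-style loss with a stop-gradient for a pair of
# adjacent patches from the same slide.
import torch.nn.functional as F

def simsiam_loss(encoder, predictor, patch_a, patch_b):
    z_a, z_b = encoder(patch_a), encoder(patch_b)   # projections
    p_a, p_b = predictor(z_a), predictor(z_b)       # predictions
    # Negative cosine similarity; .detach() implements the stop-gradient,
    # which prevents representational collapse without negative pairs.
    loss = -(F.cosine_similarity(p_a, z_b.detach(), dim=-1).mean()
             + F.cosine_similarity(p_b, z_a.detach(), dim=-1).mean()) / 2
    return loss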
“…It is not easy to extract discriminative features. Second, the number of categories in pathological image datasets is typically small, e.g., 2 or 4 [19, 26, 3]. This is limited compared to natural image understanding tasks, which restricts learning effectiveness.…”
Section: Introduction (mentioning)
confidence: 99%
“…Most recently, transformers and graph neural networks that are intrinsically trained with the correlation information between different tiles along with the tile images have been proposed [134, 141–143]. Another approach that is becoming more and more common in DL systems in histopathology is contrastive SSL, a subset of unsupervised learning [56, 57, 144, 145]. In contrastive self-supervised training, the model learns the patterns in a dataset in the absence of any labels by contrasting dissimilar images and rewarding similar images; the aim is thereby to obtain better representations of images [146, 147].…”
Section: Transition Towards New Technologies (mentioning)
confidence: 99%
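The statement above describes contrastive self-supervised training as contrasting dissimilar images while rewarding similar ones. A minimal sketch of the NT-Xent loss used in SimCLR-style training follows, assuming PyTorch; variable names and the temperature value are illustrative.

# Illustrative NT-Xent (normalized temperature-scaled cross-entropy) loss.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit length
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) cosine similarities
    n = z1.size(0)
    # Mask out self-similarity so a sample is never its own candidate.
    sim.fill_diagonal_(float('-inf'))
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    # Cross-entropy over the remaining 2N-1 candidates rewards the positive pair.
    return F.cross_entropy(sim, targets)

In a histopathology setting, z1 and z2 would be the projected embeddings of two augmented views of the same batch of tissue patches.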