2022
DOI: 10.3390/cancers14235778

Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels

Abstract: Recent methods in computational pathology have trended towards semi- and weakly-supervised methods requiring only slide-level labels. Yet, even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method initially trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-ins…
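The pipeline the abstract describes — SimCLR tile embeddings fused into one slide-level vector by an attention-based multiple-instance pooling step — can be sketched as follows. This is a minimal illustration in the style of standard attention-based MIL pooling, not the paper's actual implementation; the parameters `w` and `v` are hypothetical stand-ins for weights that would normally be learned.

```python
import numpy as np

def attention_mil_pool(tile_embeddings, w, v):
    """Fuse tile-wise embeddings into a single slide-level representation
    via attention-based MIL pooling (a common formulation; the paper's
    exact architecture may differ).

    tile_embeddings: (n_tiles, d) array, e.g. SimCLR tile features.
    w: (d, h) projection matrix, v: (h,) scoring vector -- hypothetical
    learned parameters.
    """
    h = np.tanh(tile_embeddings @ w)       # (n_tiles, h) hidden scores
    scores = h @ v                         # (n_tiles,) one scalar per tile
    a = np.exp(scores - scores.max())      # numerically stable softmax
    a /= a.sum()                           # attention weights sum to 1
    return a @ tile_embeddings             # (d,) weighted slide embedding
```

Because the attention weights form a convex combination, the slide-level vector always lies within the per-dimension range of the tile embeddings, which keeps it in the same feature space as the tiles.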


Cited by 17 publications (13 citation statements)
References 86 publications
“…At the moment, backbones pretrained on Medical Images perform poorly when compared to those pretrained on ImageNet-1K or ImageNet-22K. This confirms what recent works have found [22,7]. One possible explanation for this is the fact that the best performing available models have been trained in an unsupervised manner for a long time on large GPU clusters [5], whereas those pretrained on Medical Images had access to more limited resources and less diverse data [7].…”
Section: Backbone Selection (supporting)
confidence: 78%
“…Then, a MIL model, MI-LR, was used to predict the patient-level positive probability with the cell vectors belonging to the sample. For the samples of exfoliated margin cells collected from [27] …, and self-supervised CL does not require exogenous labels [54,55]. MICLEAR utilized self-supervised SimCLR for cell embedding in consideration of high intercellular heterogeneity occurring in the positive samples of exfoliated cells, which may introduce huge label noise when using the patient-level labels directly.…”
Section: Discussion (mentioning)
confidence: 99%
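The MI-LR step the excerpt mentions — mapping per-cell vectors to a single patient-level positive probability — can be sketched as a generic multiple-instance logistic regression. The excerpt does not specify MI-LR's aggregation rule, so the noisy-OR combination below and the parameters `beta` and `b` are illustrative assumptions, not the cited model.

```python
import numpy as np

def mi_lr_predict(cell_vectors, beta, b):
    """Patient-level positive probability from per-cell embeddings via a
    generic multiple-instance logistic regression sketch.

    cell_vectors: (n_cells, d) array of cell embeddings.
    beta: (d,) weights, b: scalar bias -- hypothetical learned parameters.
    """
    # Per-cell positive probability from a shared logistic model.
    p_cell = 1.0 / (1.0 + np.exp(-(cell_vectors @ beta + b)))
    # Noisy-OR aggregation: the patient is positive if any cell is positive.
    return 1.0 - np.prod(1.0 - p_cell)
```

Noisy-OR is one common MIL aggregation; mean- or max-pooling of the per-cell probabilities would be equally plausible stand-ins here.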
“…Indeed, in study [32], it was shown how a VGG-16 convolutional neural network outperformed (accuracy: 0.841, AUC: 0.903) classical machine learning methods in differentiating between the two NSCLC subtypes. Furthermore, in study [61], a self-supervised learning approach reached an AUC that was equal to 0.8641.…”
Section: Limitation (mentioning)
confidence: 99%