2023
DOI: 10.1016/j.media.2022.102645
RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval

Cited by 89 publications (72 citation statements)
References 35 publications
“…We chose this model due to its broad use in the computational pathology research literature [14]. The second model is the Retrieval with Clustering-guided Contrastive Learning (RetCCL) [32] model, a ResNet50 backbone that was trained on a pathology dataset with self-supervised learning (SSL).…”
Section: Methods | Citation type: mentioning | Confidence: 99%
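The excerpt above treats RetCCL as a frozen tile-level feature extractor. Below is a minimal sketch of that usage pattern, assuming a PyTorch ResNet50 backbone with the classification head removed as a stand-in for the actual RetCCL checkpoint (whose weights are not part of this excerpt); the preprocessing constants and the helper function are illustrative.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

# Sketch only: a plain ResNet50 stands in for the RetCCL checkpoint here.
# In practice the published RetCCL weights would be loaded into this backbone.
backbone = resnet50(weights=None)
backbone.fc = torch.nn.Identity()   # drop the 1000-class head, keep the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),                   # HxWx3 uint8 array -> float tensor in [0, 1]
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed_tiles(tiles):
    """tiles: list of HxWx3 uint8 RGB arrays -> (N, 2048) feature tensor."""
    batch = torch.stack([preprocess(t) for t in tiles])
    return backbone(batch)          # shape (N, 2048), one embedding per tile
```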
“…8). We then used our in-house open-source DL pipeline (https://github.com/KatherLab/marugoto), which uses the SSL-trained model RetCCL [32] to obtain 2048 features per tile and uses attMIL to make patient-level predictions [33,34].…”
Section: Data Acquisition and Experimental Design | Citation type: mentioning | Confidence: 99%
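The attMIL step referenced here pools per-tile features into a single patient-level prediction. The following is a minimal sketch of attention-based multiple instance learning in the spirit of that description, not the marugoto implementation itself; the hidden size, the two-class head, and the 512-tile bag in the usage line are assumptions taken from the neighbouring excerpts.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Minimal attention-based MIL head: (n_tiles, 2048) -> class logits."""
    def __init__(self, in_dim=2048, hidden_dim=256, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # one attention score per tile, normalized over the bag
        self.attention = nn.Sequential(
            nn.Linear(hidden_dim, 128), nn.Tanh(), nn.Linear(128, 1)
        )
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, bag):                           # bag: (n_tiles, in_dim)
        h = self.encoder(bag)                         # (n_tiles, hidden_dim)
        a = torch.softmax(self.attention(h), dim=0)   # (n_tiles, 1), sums to 1
        z = (a * h).sum(dim=0)                        # attention-weighted bag embedding
        return self.head(z)                           # (n_classes,) patient-level logits

# Example: one bag of 512 tiles, each with a 2048-d RetCCL-style feature vector
logits = AttentionMIL()(torch.randn(512, 2048))
```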
“…They showed that their histopathology-specific model outperformed a general-purpose contrastive learning self-supervised model (i.e., MoCo [58]) on three datasets (tumor metastasis detection, tissue type classification, and tumor cellularity quantification) under annotation-limited settings. Lastly, Wang et al. developed a self-supervised method combined with self-attention to learn the patch-level embeddings [62,63], and then performed slide-level image retrieval based on said embeddings [64].…”
Section: Related Work | Citation type: mentioning | Confidence: 99%
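The retrieval step at the end of this excerpt ranks database slides by how close their embeddings are to a query. A rough sketch of such a lookup is given below, assuming slide-level embeddings have already been pooled from the patch embeddings (the pooling itself is not described in this excerpt); cosine similarity is used here as one plausible distance.

```python
import numpy as np

def retrieve_slides(query_emb, db_embs, top_k=5):
    """Rank database slides by cosine similarity to a query embedding.

    query_emb: (d,) slide-level embedding of the query WSI
    db_embs:   (n_slides, d) matrix of database slide embeddings
    returns the indices of the top_k most similar slides.
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity against every slide
    return np.argsort(-sims)[:top_k]   # highest-similarity slides first
```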
“…Secondly, the patches for each cohort were color normalized using the Macenko spectral matching technique (40) to enforce a standardized color distribution across cohorts. To train the prediction models, we used our in-house open-source DL pipeline "marugoto" (https://github.com/KatherLab/marugoto), consisting of a self-supervised learning (SSL) model based on a pre-trained ResNet50 architecture with ImageNet weights, fine-tuned pan-cancer on approximately 32,000 WSI to extract a 2048-dimensional feature vector for each patch per patient (41). To obtain patient-level predictions, 512×2048 feature matrices (MIL bags) are constructed by concatenating 512 feature vectors selected at random per patient and fed into an attMIL framework with the following architecture: (512×256), (256×2), with a subsequent attention mechanism (Figure 1B).”
Section: Image Preprocessing | Citation type: mentioning | Confidence: 99%
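Macenko stain normalization, named at the start of this excerpt, remaps every patch to the stain profile of a single reference image before feature extraction. A minimal sketch using the staintools package is shown below; the reference file name is illustrative and this is an assumed setup, not the cited pipeline's actual normalization code.

```python
import staintools

# Illustrative file name: the reference tile defines the target stain distribution.
target = staintools.read_image("reference_tile.png")
target = staintools.LuminosityStandardizer.standardize(target)

# Macenko spectral matching: estimate stain vectors and map patches onto the target's.
normalizer = staintools.StainNormalizer(method="macenko")
normalizer.fit(target)

def normalize_patch(patch_rgb):
    """Map an H&E patch (HxWx3 uint8 RGB) onto the reference stain distribution."""
    patch_rgb = staintools.LuminosityStandardizer.standardize(patch_rgb)
    return normalizer.transform(patch_rgb)
```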