2021
DOI: 10.1186/s12859-021-04210-8

Contrastive self-supervised clustering of scRNA-seq data

Abstract: Background Single-cell RNA sequencing (scRNA-seq) has emerged as a main strategy to study transcriptional activity at the cellular level. Clustering analysis is routinely performed on scRNA-seq data to explore, recognize or discover underlying cell identities. The high dimensionality of scRNA-seq data and its significant sparsity, accentuated by frequent dropout events that introduce false zero count observations, make the clustering analysis computationally challenging. Even though multiple scRN…
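The contrastive self-supervised strategy named in the title is, per the citation statements below, SimCLR-style instance discrimination. As an illustrative sketch only (not the authors' implementation), the core of that approach is the NT-Xent loss, which pulls two augmented views of the same cell together and pushes all other cells in the batch apart:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss for two batches of embeddings.

    z1, z2: (n, d) arrays holding embeddings of two augmented views
    of the same n cells, row i of z1 paired with row i of z2.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit rows -> dot = cosine sim
    sim = z @ z.T / temperature                        # (2n, 2n) similarity matrix
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for row i is its other view, offset by n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

With orthogonal, identical views the loss reduces to a closed form, which makes the implementation easy to sanity-check by hand.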

Citations: Cited by 41 publications (42 citation statements)
References: References 45 publications
“…Momentum contrastive self-supervised learning achieved performance in visual representation learning of images comparable to supervised representation learning ( Chen et al., 2020b ; He et al., 2019 ). Compared with a similar method proposed by Ciortan and colleagues ( Ciortan and Defrance, 2021 ), which used a network of 3 linear layers as the feature encoder, we observed that Miscell achieved better performance ( Figure S6 ). This is probably because of the better representation capacity of the feature encoder used by Miscell.…”
Section: Discussion
confidence: 53%
“…Most metrics, however, require the ground truth labelling, which was not available in this study. Besides, the clustering itself can be approached in many different ways, using classical or newly developed deep-learning-based algorithms (Ciortan and Defrance, 2021). In this study, we only intended to fairly compare clustering results obtained under identical conditions (same algorithm, grid search parameters, evaluation metrics, etc.)…”
Section: Appendix E Discussion
confidence: 99%
“…Finally, self-supervision has been successfully applied to cell segmentation, annotation and clustering (Lu et al., 2019; Santos-Pata et al., 2021). Most recently, a self-supervised contrastive learning framework has been proposed by Ciortan and Defrance (2021) to learn representations of scRNA-seq data. The authors follow the idea of SimCLR (Chen et al., 2020) and show state-of-the-art (SOTA) performance on the clustering task.…”
Section: Appendix A Related Work
confidence: 99%
“…Following its success in computer vision, this strategy has been adopted in several applications in other research fields including classification of electrocardiograms 31 and clustering of scRNA-seq data. 32 It has been demonstrated that the development of modality-specific data augmentation is critical to the performance of models trained using contrastive learning.…”
Section: Introduction
confidence: 99%
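The last statement stresses that contrastive learning needs augmentations matched to the data modality. For scRNA-seq, one commonly used modality-specific augmentation is to randomly zero out gene counts, mimicking dropout events; the helper below (`mask_genes` is a hypothetical name, not from the cited papers) sketches how such a view could be generated:

```python
import numpy as np

def mask_genes(counts, mask_frac=0.2, rng=None):
    """Illustrative scRNA-seq augmentation: randomly zero a fraction of
    gene counts per cell, mimicking dropout-induced false zeros.

    counts: (cells, genes) array of expression counts.
    Returns a new array; the input is left untouched.
    """
    rng = np.random.default_rng(rng)
    mask = rng.random(counts.shape) < mask_frac  # True -> gene is dropped
    view = counts.copy()
    view[mask] = 0
    return view
```

Two independent calls on the same count matrix yield the two "views" a contrastive loss compares; entries are either preserved exactly or set to zero, never altered otherwise.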