2021
DOI: 10.1101/2021.11.24.469947
Preprint
Annotation of Spatially Resolved Single-cell Data with STELLAR

Abstract: Spatial protein and RNA imaging technologies have been gaining rapid attention, but current computational methods for annotating cells are based on techniques established for dissociated single-cell technologies and thus do not take spatial organization into account. Here we present STELLAR, a geometric deep learning method that utilizes spatial and molecular cell information to automatically assign cell types from an annotated reference set as well as discover new cell types and cell states. STELLAR transfers …

Cited by 12 publications (8 citation statements); references 50 publications.
“…Our method is generally applicable to images of cells in their native tissue context collected via highly multiplexed single-cell imaging technologies such as co-detection by indexing (CODEX), cyclic immunofluorescence (CyCIF), imaging mass cytometry (IMC), multiplexed ion beam imaging (MIBI) and similarly multiplexed spatial platforms. The central aspect of UTAG is the combination of cellular phenotypes with graphs of cellular proximity; deep learning on such graphs has also been employed for cell type prediction 19 , inference of cellular communication 20 and data exploration 21 . These models are computationally expensive to train and their results heavily depend on training data, which may preclude joint analysis of expression and morphological features across studies and data types.…”
Section: Unsupervised Discovery of Tissue Architecture with Graphs (mentioning; confidence: 99%)
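The quoted passage hinges on combining a graph of cellular proximity with per-cell phenotype measurements. The sketch below is only a minimal illustration of that general idea, not the implementation of UTAG or of the cited methods; the function name, the 15-unit radius and the random inputs are assumptions for demonstration.

```python
# Minimal, illustrative sketch: combine a cell proximity graph with per-cell
# phenotype features by averaging each cell's markers over its spatial
# neighbours. Not the implementation of UTAG or of any cited method.
import numpy as np
from sklearn.neighbors import radius_neighbors_graph

def smooth_phenotypes(coords, phenotypes, radius=15.0):
    """Average each cell's phenotype vector with those of its spatial neighbours."""
    # Binary adjacency connecting cells closer than `radius` (same units as coords).
    adj = radius_neighbors_graph(coords, radius, include_self=True)
    # Row-normalise so every cell averages over itself and its neighbours.
    row_sums = np.asarray(adj.sum(axis=1)).ravel()
    return adj @ phenotypes / row_sums[:, None]

# Hypothetical example: 500 cells, 2D coordinates, 30 protein markers.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(500, 2))
phenotypes = rng.random((500, 30))
spatially_aware_features = smooth_phenotypes(coords, phenotypes)
```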
“…To benchmark UTAG against other methods for high-order tissue structure inference, we ran SpaGene 25 and SpatialLDA 26 on both datasets for which we have ground truth annotation of microanatomical domains. For this purpose, we also reran UTAG using a max_dist of 15 for both datasets, under Leiden clustering resolutions of 0.05, 0.07, 0.1, 0.3, 0.5, 0.8, 1.0 and 2.0, which resulted in 3, 5, 10, 11, 14, 17, 19, 25, 31 and 55 clusters for the healthy lung data, and 3, 4, 6, 22, 23, 27, 38 and 61 clusters for the UTUC data. We intentionally do not use the interpreted annotations in Fig.…”
Section: Running SpaGene and SpatialLDA (mentioning; confidence: 99%)
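The benchmarking statement above describes sweeping Leiden clustering resolutions over a spatial graph built with a maximum neighbour distance of 15. The snippet below is a generic, hypothetical sketch of such a sweep using scanpy and squidpy; it is not the authors' UTAG pipeline, and the AnnData object `adata` with coordinates in `adata.obsm["spatial"]` is assumed.

```python
# Illustrative resolution sweep over a spatial proximity graph.
# Assumes `adata` is an AnnData with cell coordinates in adata.obsm["spatial"].
import scanpy as sc
import squidpy as sq

def leiden_resolution_sweep(adata, max_dist=15.0,
                            resolutions=(0.05, 0.07, 0.1, 0.3, 0.5, 0.8, 1.0, 2.0)):
    # Connect cells closer than `max_dist` (same units as the stored coordinates).
    sq.gr.spatial_neighbors(adata, coord_type="generic", radius=max_dist)
    n_clusters = {}
    for res in resolutions:
        key = f"leiden_{res}"
        sc.tl.leiden(
            adata,
            resolution=res,
            key_added=key,
            adjacency=adata.obsp["spatial_connectivities"],
        )
        n_clusters[res] = adata.obs[key].nunique()
    return n_clusters  # e.g. {0.05: 3, 0.07: 5, ...}
```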
“…The lack of a unified image-cell type dictionary hinders the generalization of trained models to new datasets or unseen cell types. Comprehensive results from human cell consortia efforts 36,37 as well as computational methods that accommodate unseen cell types 19 could potentially be incorporated to overcome these limitations.…”
Section: Discussion (mentioning; confidence: 99%)
“…There has been increased interest in applying graph-based deep learning methods to spatial cellular structures in recent literature [14][15][16] . Graph neural networks 17,18 (GNNs), a class of deep learning methods designed for graph structures, have been applied to a variety of analysis tasks, including cell type prediction 19 , representation learning 20 , cellular communication modeling 21 and tissue structure detection 22 . As most of these methods are designed for cellular property modeling, there still exists a gap between cellular-level graph analysis and patient-level phenotypes.…”
Section: Introduction (mentioning; confidence: 99%)
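For readers unfamiliar with the GNN-based cell type prediction mentioned in the statement above, the following is a minimal, hypothetical example of node-level classification on a cell proximity graph using PyTorch Geometric. It is not STELLAR's architecture; the two-layer GCN, the layer sizes and the random inputs are illustrative assumptions.

```python
# Generic two-layer GCN for per-cell (node-level) cell type prediction on a
# cell proximity graph. Illustrative only; not STELLAR's architecture.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class CellTypeGCN(torch.nn.Module):
    def __init__(self, n_markers, n_cell_types, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(n_markers, hidden)    # aggregate neighbour phenotypes
        self.conv2 = GCNConv(hidden, n_cell_types)

    def forward(self, x, edge_index):
        # x: (n_cells, n_markers) phenotype matrix
        # edge_index: (2, n_edges) cell proximity graph
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)           # per-cell class logits

# Hypothetical forward pass: 100 cells, 30 markers, 10 cell types.
x = torch.randn(100, 30)
edge_index = torch.randint(0, 100, (2, 400))
logits = CellTypeGCN(30, 10)(x, edge_index)
```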
“…Recent studies applying unsupervised deep learning models to histopathological images such as hematoxylin and eosin staining have shown that it is possible to extract morphological features that are, for example, predictive of gene expression 18 . Other studies have also employed deep learning of graphs of cellular proximity with cellular phenotypes for cell type prediction 19 , inference of cellular communication 20 , and data exploration 21 . These models are computationally expensive to train, and their results heavily depend on training data, which may preclude joint analysis of expression and morphological features across studies and data types.…”
Section: Introduction (mentioning; confidence: 99%)