2020
DOI: 10.1101/2020.09.04.283812
Preprint

BayesSpace enables the robust characterization of spatial gene expression architecture in tissue sections at increased resolution

Abstract: Recently developed spatial gene expression technologies such as the Spatial Transcriptomics and Visium platforms allow for comprehensive measurement of transcriptomic profiles while retaining spatial context. However, existing methods for analyzing spatial gene expression data often do not efficiently leverage the spatial information and fail to address the limited resolution of the technology. Here, we introduce BayesSpace, a fully Bayesian statistical method for clustering analysis and resolution enhancement…
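The abstract describes BayesSpace only at a high level. For orientation, the display below is a minimal sketch of the general hidden Markov random field formulation that spatial clustering methods of this kind build on: a Gaussian likelihood on a low-dimensional expression representation y_i together with a Potts-style prior that encourages neighboring spots to share a cluster label. The notation (y_i, z_i, γ) is illustrative and not taken from the paper.

$$
y_i \mid z_i = k \;\sim\; \mathcal{N}(\mu_k, \Sigma_k),
\qquad
p(z_1,\ldots,z_n) \;\propto\; \exp\!\Big(\gamma \sum_{i \sim j} \mathbb{1}\{z_i = z_j\}\Big),
$$

where i ∼ j ranges over pairs of neighboring spots on the array and γ controls the strength of spatial smoothing.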

Cited by 18 publications (22 citation statements)
References 35 publications
“…For example, using the customized website spatial.libd.org/spatialLIBD, researchers can perform spatial registration of their scRNA-seq clusters. This data can also be used for developing new analytical methods (Biancalani et al., 2020; Zhao et al., 2020).…”
Section: Results
confidence: 99%
“…However, Loupe and Giotto currently do not support visualizing more than one tissue section at a time, which is useful for comparing replicates and annotating observed spots on the Visium array across samples. Furthermore, while unsupervised clustering methods are widely developed (Zhao et al., 2020; Biancalani et al., 2020), manual annotation of spots using known marker genes remains important, as do cross-sample dimension reduction techniques such as UMAP and t-SNE.…”
Section: Introduction
confidence: 99%
“…Further integration of spatial datasets with transcriptomics and other single-cell data modalities will uncover an unprecedented level of understanding of the composition and function of the organism (92). The rapid proliferation of single-cell omics has greatly improved the understanding of heterogeneity within cell populations, and any serious approach that hopes to advance this knowledge must leverage all the tools available, including new statistical methods that can efficiently leverage diverse sources of information (93).…”
Section: Reasoning for Multimodal Approaches
confidence: 99%
“…In the presence of spatial coordinates for each transcriptome-profiled spot, spatial clustering methods achieve better classification accuracy. For example, Zhao et al. [2020] showed that BayesSpace improves resolution and achieves better classification accuracy in manually annotated human brain samples. However, these existing spatial clustering methods have certain limitations.…”
Section: Introduction
confidence: 99%
“…To address all these limitations, we propose a Spatial Clustering using the hidden Markov random field based on Empirical Bayes (SC-MEB) to model a low-dimensional representation of the gene expression matrix incorporating the spatial coordinates for each measurement. Compared with existing methods [Dries et al., 2019, Zhao et al., 2020], SC-MEB is not only computationally efficient and scalable as the sample size grows, but is also capable of choosing the smoothness parameter and the number of clusters. We derive an efficient expectation-maximization (EM) algorithm based on iterative conditional mode (ICM) and further choose the number of clusters for SC-MEB with the Bayesian information criterion (BIC) [Claeskens et al., 2008].…”
Section: Introduction
confidence: 99%
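The BIC-based choice of the number of clusters mentioned in the SC-MEB excerpt can be illustrated generically. The sketch below is not SC-MEB or BayesSpace: it fits plain Gaussian mixtures (no spatial smoothness term) to a hypothetical low-dimensional embedding and keeps the component count with the lowest BIC; the array `embedding` is an assumed stand-in for PCA-reduced expression data.

```python
# Hedged sketch: selecting the number of clusters by BIC with a plain
# Gaussian mixture. A generic analogue of the BIC step described in the
# SC-MEB excerpt, not that method's implementation; no spatial
# (Markov random field) term is included here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for a PCA-reduced expression matrix (spots x PCs).
embedding = rng.normal(size=(500, 15))

bic_by_k = {}
for k in range(2, 11):
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0)
    gmm.fit(embedding)
    bic_by_k[k] = gmm.bic(embedding)  # lower BIC is better

best_k = min(bic_by_k, key=bic_by_k.get)
print(f"number of clusters selected by BIC: {best_k}")
```

In an actual spatial clustering workflow, the same comparison would be computed from the spatial model's likelihood rather than from an independent mixture fit.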