2018
DOI: 10.1016/j.cels.2018.05.017

Generalizable and Scalable Visualization of Single-Cell Data Using Neural Networks

Abstract: Visualization algorithms are fundamental tools for interpreting single-cell data. However, standard methods, such as t-distributed stochastic neighbor embedding (t-SNE), are not scalable to datasets with millions of cells, and the resulting visualizations cannot be generalized to analyze new datasets. Here we introduce net-SNE, a generalizable visualization approach that trains a neural network to learn a mapping function from high-dimensional single-cell gene-expression profiles to a low-dimensional visualization. We benc…

Cited by 45 publications (35 citation statements)
References 42 publications (54 reference statements)
“…One potential advantage of this approach is that the 'most appropriate' perplexity does not need to grow with the sample size, as long as the mini-batch size remains constant. Parametric t-SNE has been recently applied to transcriptomic data under the names net-SNE (Cho et al, 2018) and scvis (Ding et al, 2018). The latter method combined parametric t-SNE with a variational autoencoder, and was claimed to yield more interpretable visualisations than standard t-SNE due to better preserving the global structure.…”
Section: Comparison to Related Work
confidence: 99%
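The statement above turns on the core idea of parametric t-SNE: the embedding is a learned function f_theta rather than free per-point coordinates, so the loss can be computed on mini-batches of fixed size and new cells can be mapped without re-optimizing. A minimal NumPy sketch of the two ingredients — a small mapping network and a per-batch t-SNE loss — is given below; the tanh MLP, layer sizes, and plain Gaussian input affinities are illustrative assumptions, not the actual architectures used by net-SNE or scvis:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_embed(X, W1, b1, W2, b2):
    """Parametric map f_theta: R^d -> R^2 (one tanh hidden layer)."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

def tsne_batch_loss(X, Y, sigma=1.0):
    """KL(P || Q) on one mini-batch: Gaussian affinities P in input
    space, Student-t affinities Q in the embedding, as in t-SNE.
    Because P is built within the batch, the affinity bandwidth
    (perplexity) need not grow with the full dataset size."""
    D = np.square(X[:, None] - X[None]).sum(-1)   # pairwise sq. distances
    P = np.exp(-D / (2 * sigma**2))
    np.fill_diagonal(P, 0)
    P /= P.sum()
    E = np.square(Y[:, None] - Y[None]).sum(-1)
    Q = 1.0 / (1.0 + E)                           # Student-t kernel
    np.fill_diagonal(Q, 0)
    Q /= Q.sum()
    eps = 1e-12                                   # numerical guard
    return np.sum(P * np.log((P + eps) / (Q + eps)))

d, h, batch = 50, 16, 64
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, 2)); b2 = np.zeros(2)

X = rng.normal(size=(batch, d))       # one mini-batch of cells
Y = mlp_embed(X, W1, b1, W2, b2)      # coordinates come from f_theta
loss = tsne_batch_loss(X, Y)
```

In a full method the loss would be minimized over (W1, b1, W2, b2) by gradient descent; once trained, `mlp_embed` maps unseen cells directly, which is what makes the visualization generalizable.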
“…Several attempts to successfully apply t-SNE-like methods to massive datasets have been recently reported including aforenoted HSNE 7,33,34 , LargeVis 10 and net-SNE 35 . However, these improved methods, when applied to large datasets, often require/benefit from considerable computational resources; for instance, the LargeVis study was performed on a 512Gb RAM, 32 core station.…”
Section: Discussion
confidence: 99%
“…Finally, in addition to visualization of single cell profiles using either t-SNE 23 , FIt-SNE 24 , UMAP 25 (Methods), or a force directed layout embedding (FLE 28 ) of the diffusion pseudotime map (Methods), we also include a deep-learning-based visualization technique that speeds up a generalized set of these and similar visualization algorithms (Methods). Inspired by net-SNE 48 , this technique is based on the assumption that large datasets are often redundant and their global structure can be captured using only a portion of the data. It thus first subsamples a fraction of cells according to each cell's local density, ensuring higher rate of sampling from rare and sparse clusters, and then embeds the subsampled cells using the embedding algorithm of interest, such as UMAP ( Fig.…”
Section: HNSW Has a Near Optimal Recall (Supplementary…
confidence: 99%
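The density-aware subsampling described in the excerpt above — sample inversely to local density so rare, sparse clusters survive, then run the embedder of choice on the subsample — can be sketched with a toy example. The k-NN-radius density proxy, cluster construction, and sample size below are assumptions for illustration, not the cited pipeline's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one abundant tight cluster and one rare diffuse cluster.
dense = rng.normal(0.0, 0.1, size=(200, 5))
rare = rng.normal(5.0, 1.0, size=(20, 5))
X = np.vstack([dense, rare])
labels = np.array([0] * 200 + [1] * 20)

def knn_radius(X, k=10):
    """Distance to the k-th nearest neighbour: a large radius means
    low local density (crude brute-force O(n^2) density proxy)."""
    D = np.sqrt(np.square(X[:, None] - X[None]).sum(-1))
    return np.sort(D, axis=1)[:, k]   # column 0 is self (distance 0)

r = knn_radius(X)
p = r / r.sum()                 # sampling prob. inversely tied to density
idx = rng.choice(len(X), size=60, replace=False, p=p)

# Rare cells are over-represented relative to their 20/220 share, so an
# embedder (e.g. UMAP or t-SNE) run only on X[idx] still sees them.
rare_share = (labels[idx] == 1).mean()
```

Uniform subsampling would keep the rare cluster at roughly its 9% population share; weighting by the k-NN radius lifts it well above that, which is the point of sampling by inverse local density before embedding.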