2022
DOI: 10.1007/s00439-021-02417-6

Interpretable generative deep learning: an illustration with single cell gene expression data

Abstract: Deep generative models can learn the underlying structure, such as pathways or gene programs, from omics data. We provide an introduction as well as an overview of such techniques, specifically illustrating their use with single-cell gene expression data. For example, the low dimensional latent representations offered by various approaches, such as variational auto-encoders, are useful to get a better understanding of the relations between observed gene expressions and experimental factors or phenotypes. Furth…
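As a minimal, hypothetical sketch of the idea described in the abstract (not the paper's actual models): a variational auto-encoder compresses a cells-by-genes expression matrix into a low-dimensional latent space whose coordinates can then be related to experimental factors or phenotypes. All layer sizes, names, and the Gaussian reconstruction loss below are illustrative assumptions.

```python
# Minimal VAE sketch for gene expression (illustrative assumptions throughout).
import torch
import torch.nn as nn

class GeneExpressionVAE(nn.Module):
    def __init__(self, n_genes=2000, n_latent=10, n_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, n_hidden), nn.ReLU())
        self.to_mu = nn.Linear(n_hidden, n_latent)        # latent means
        self.to_logvar = nn.Linear(n_hidden, n_latent)    # latent log-variances
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_genes)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy usage: after training, the latent means (mu) can be plotted or
# correlated against known experimental factors or phenotypes.
x = torch.randn(64, 2000)                  # 64 cells, 2000 genes (simulated)
model = GeneExpressionVAE()
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
```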

Cited by 10 publications (5 citation statements)
References 96 publications (149 reference statements)
“…Interpretability is an aspect that is of great importance for the application of DGMs [50]. Some of the models we have reviewed already offer the possibility of making the corresponding outputs interpretable for users.…”
Section: Outlook/Discussion
confidence: 99%
“…The Spearman correlations calculated between each latent variable and the features of each modality then allow relevant features to be identified. Additionally, by using a Laplace prior, scMM learns disentangled representations, with correlations between latent variables being penalized, which allows for better interpretation of individual features [55].…”
Section: Approaches For Paired Data
confidence: 99%
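A minimal sketch of the interpretation step quoted above, assuming the trained latent codes and the original feature matrix are available as NumPy arrays; it is not taken from the scMM code base. Spearman's rho between each latent dimension and each feature flags the features most associated with that dimension.

```python
# Hypothetical sketch: rank features (e.g. genes) by Spearman correlation
# with each latent dimension. Shapes and the cutoff k are assumptions.
import numpy as np
from scipy.stats import spearmanr

def top_features_per_latent(latent, features, feature_names, k=10):
    """latent: (cells, n_latent); features: (cells, n_features)."""
    n_latent, n_features = latent.shape[1], features.shape[1]
    top = {}
    for d in range(n_latent):
        rho = np.empty(n_features)
        for g in range(n_features):
            rho[g], _ = spearmanr(latent[:, d], features[:, g])
        order = np.argsort(-np.abs(rho))[:k]
        top[d] = [(feature_names[g], float(rho[g])) for g in order]
    return top

# Toy usage with random data
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 10))
features = rng.normal(size=(500, 200))
names = [f"gene_{g}" for g in range(200)]
print(top_features_per_latent(latent, features, names, k=5)[0])
```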
“…Interpretability is an aspect that is of great importance for the application of DGMs (Treppner et al., 2022). Some of the models we have reviewed already offer the possibility of making the corresponding outputs interpretable for users.…”
Section: Outlook and Discussion
confidence: 99%
“…The Spearman correlations calculated between each latent variable and the features of each modality then allow relevant features to be identified. Additionally, by using a Laplace prior, scMM learns disentangled representations, with correlations between latent variables being penalized, which allows for better interpretation of individual features (Treppner et al., 2022).…”
Section: Literature Review
confidence: 99%
“…While multiple layers for dimension reduction allow for more flexibility in the learning task, which can help to construct a well-structured embedding of the data in a latent space, it is particularly difficult to determine the most explanatory genes for the learned patterns in different latent dimensions. Therefore, neural network-based approaches often rely on post-hoc feature attribution to link groups of genes to specific latent dimensions [31,32,33], where an additional analysis step is applied to an already trained model (e.g. [34,35]).…”
Section: Introduction
confidence: 99%
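For illustration only, a generic post-hoc attribution sketch in the spirit of the statement above (not any of the cited methods [31–35]): the gradient of one latent dimension of an already trained encoder with respect to the input genes yields a per-gene importance score for that dimension. The encoder architecture and data here are toy assumptions.

```python
# Hypothetical post-hoc, gradient-based attribution linking genes to a
# single latent dimension of a trained encoder.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_genes, n_latent = 200, 8
encoder = nn.Sequential(nn.Linear(n_genes, 64), nn.ReLU(), nn.Linear(64, n_latent))

def gene_attribution(encoder, x, latent_dim):
    """Mean absolute gradient of one latent dimension w.r.t. each input gene."""
    x = x.clone().requires_grad_(True)
    z = encoder(x)[:, latent_dim].sum()   # scalar, so backward() gives per-gene grads
    z.backward()
    return x.grad.abs().mean(dim=0)       # one importance score per gene

x = torch.randn(100, n_genes)             # 100 cells of toy expression data
scores = gene_attribution(encoder, x, latent_dim=0)
top_genes = torch.topk(scores, k=10).indices
print(top_genes)
```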