2021
DOI: 10.1093/bib/bbab315
XOmiVAE: an interpretable deep learning model for cancer classification using high-dimensional omics data

Abstract: The lack of explainability is one of the most prominent disadvantages of deep learning applications in omics. This ‘black box’ problem can undermine the credibility and limit the practical implementation of biomedical deep learning models. Here we present XOmiVAE, a variational autoencoder (VAE)-based interpretable deep learning model for cancer classification using high-dimensional omics data. XOmiVAE is capable of revealing the contribution of each gene and latent dimension for each classification prediction…

Cited by 48 publications (37 citation statements)
References 39 publications
“…Table 1 provides a summary of the distribution of the examples in the dataset after the splitting, before the data augmentation step. The MOT model metric scores are presented in Table 2 alongside metric scores from OmiVAE (26), OmiEmbed (35), XOmiVAE (36) and Gene-Transformer (37). OmiEmbed is an extension of OmiVAE that integrated a multi-task aspect into the original model previously introduced.…”
Section: Results
confidence: 99%
“…Deep learning has become the mainstream of machine learning, and with the expansion of its applications, technical problems from the perspective of real-world problem solving have also emerged, primarily the black box problem: neural-network machine learning in deep learning is a black box [130][131][132][133][134][135][136][137][138][139][140][141][142][143][144]. The learning results are reflected in the node weights, and the obtained regularities and models are not represented in a form that humans can directly understand.…”
Section: Black Box Problem
confidence: 99%
“…In [ 165 ], a VAE for classifying cancer from the gene expression profiles was modified to identify which genes contributed to the classification. As the VAE learned to reconstruct the data, a classifying neural network that was connected to the bottleneck layer learned the mapping between the input data and the labels.…”
Section: Challenges
confidence: 99%
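The excerpt above describes the architecture at a high level: a VAE learns to reconstruct the omics input, while a classifier attached to the bottleneck layer maps the latent code to cancer labels. The forward pass of that two-branch design can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the dimensions, the single-layer linear encoder/decoder/classifier, and the random stand-in weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1000 genes, 16 latent dimensions, 3 tumour classes.
n_genes, n_latent, n_classes = 1000, 16, 3

# Random weights standing in for trained parameters (illustration only).
W_enc = rng.normal(scale=0.01, size=(n_genes, 2 * n_latent))  # outputs mean and log-variance
W_dec = rng.normal(scale=0.01, size=(n_latent, n_genes))
W_clf = rng.normal(scale=0.01, size=(n_latent, n_classes))

def forward(x):
    """One forward pass: encode, reparameterise, then decode and classify."""
    h = x @ W_enc
    mu, logvar = h[:, :n_latent], h[:, n_latent:]
    # Reparameterisation trick: sample z from N(mu, sigma^2) differentiably.
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    x_hat = z @ W_dec    # reconstruction branch (the VAE decoder)
    logits = z @ W_clf   # classification branch attached to the bottleneck
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
    return x_hat, probs

x = rng.normal(size=(4, n_genes))  # a mini-batch of 4 expression profiles
x_hat, probs = forward(x)
```

Because the classifier reads only the latent code, attribution methods can trace a prediction back through `W_clf` to individual latent dimensions, and through the encoder to individual genes, which is the interpretability route the excerpt refers to.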