2022
DOI: 10.1371/journal.pone.0269570

Pathway importance by graph convolutional network and Shapley additive explanations in gene expression phenotype of diffuse large B-cell lymphoma

Abstract: Deep learning techniques have recently been applied to analyze associations between gene expression data and disease phenotypes. However, there are concerns regarding the black box problem: it is difficult to interpret, from model parameters alone, why a deep learning model produces its predictions. New methods have been proposed for interpreting deep learning model predictions but have not yet been applied to genetics. In this study, we demonstrated that applying SHapley Additive exPlanations (SHAP) to a …
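The truncated abstract describes scoring pathway importance from SHAP values computed over a graph convolutional network. As a rough illustration of the aggregation idea only (the paper's actual GCN model and scoring procedure are not reproduced here, and the pathway-to-gene mapping below is invented), gene-level SHAP values can be rolled up into pathway scores:

```python
# Minimal sketch: aggregate gene-level SHAP values into pathway scores.
# Toy data throughout; not the paper's actual model or procedure.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 30
shap_values = rng.normal(size=(n_samples, n_genes))   # per-gene SHAP values (toy)

# Hypothetical pathway -> member gene indices mapping.
pathways = {
    "NF-kB signalling": [0, 3, 7, 12],
    "B-cell receptor":  [1, 4, 9, 20, 25],
    "Apoptosis":        [2, 5, 14],
}

# Score each pathway by the mean absolute SHAP value of its member genes.
scores = {name: np.abs(shap_values[:, idx]).mean() for name, idx in pathways.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```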

Cited by 6 publications (3 citation statements)
References 46 publications
“…SHapley Additive exPlanations (SHAP) [4] is another state-of-the-art explainability technique. Fast approximations of SHAP have been applied to analyse gene expression data [7,8,9,10,11], such as kernelExplainer, treeExplainer and gradientExplainer [12]. Yu et al. use a deep autoencoder [9] to learn gene expression representations, applying treeExplainer SHAP to measure the contributions of genes to each of the latent variables.…”
Section: Applications of SHAP (mentioning)
confidence: 99%
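The statement above describes attributing an encoder's latent variables back to individual genes with SHAP. A minimal sketch of that pattern follows; it is not Yu et al.'s pipeline: a PCA projection stands in for their deep autoencoder encoder, the gene labels are invented, and the model-agnostic KernelExplainer is used so any encoder function could be swapped in:

```python
# Sketch: attribute a latent variable of an "encoder" back to input genes
# with SHAP. PCA is a self-contained stand-in for a trained autoencoder.
import numpy as np
import shap
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # toy expression matrix: 200 samples x 50 genes
genes = [f"gene_{i}" for i in range(X.shape[1])]   # hypothetical gene labels

encoder = PCA(n_components=5).fit(X)    # stand-in for the autoencoder encoder

def latent0(x):
    """Value of latent variable 0 per sample (scalar output keeps the SHAP API stable)."""
    return encoder.transform(x)[:, 0]

# Model-agnostic KernelExplainer; the background sample summarises the data distribution.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(latent0, background)
shap_values = explainer.shap_values(X[:10], nsamples=200)   # (10 samples, 50 genes)

# Rank genes by mean |SHAP| contribution to latent variable 0.
importance = np.abs(shap_values).mean(axis=0)
print([genes[i] for i in np.argsort(importance)[::-1][:5]])
```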
“…The ability to generate local explanations, for example, using waterfall plots, may be useful for applications in personalised medicine. Compared to state-of-the-art work, which tends to apply SHAP to black-box neural networks [7,8,9,10], we apply SHAP using a probabilistic model derived from a GMM that captures disease subtype relationships. This further improves the interpretability of SHAP values.…”
Section: Summary Plot (mentioning)
confidence: 99%
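To make the quoted approach concrete, here is a hedged sketch (not the authors' implementation): it wraps a scikit-learn GaussianMixture posterior for one hypothetical subtype component, explains it with SHAP, and renders a local waterfall plot as the statement describes:

```python
# Sketch: local SHAP explanation of a GMM subtype-membership probability,
# shown as a waterfall plot. Toy data and invented gene labels throughout.
import numpy as np
import shap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                      # toy expression data
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

def subtype0_prob(x):
    """Posterior probability of membership in component 0 (a hypothetical subtype)."""
    return gmm.predict_proba(x)[:, 0]

background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(subtype0_prob, background)
sv = explainer.shap_values(X[:1], nsamples=200)     # local explanation for one sample

# Wrap the result in an Explanation object so the waterfall plot can render it.
exp = shap.Explanation(
    values=sv[0],
    base_values=explainer.expected_value,
    data=X[0],
    feature_names=[f"gene_{i}" for i in range(X.shape[1])],  # invented labels
)
shap.plots.waterfall(exp)
```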
“…Deep learning models could provide precise trait predictions directly from biological sequence data. By evaluating the significance of each nucleotide or amino acid feature, these models can also be used to link specific loci to the traits of interest 69,70.…”
Section: ANNA16 as an Explanatory Deep-Learning Model for DNA (mentioning)
confidence: 99%
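As a generic illustration of the per-nucleotide importance idea in the quoted passage (this is not ANNA16's code; the CNN and sequence below are toy stand-ins), gradient saliency over a one-hot encoded sequence yields one importance score per locus:

```python
# Sketch: gradient saliency over a one-hot DNA sequence gives a per-position
# importance score from a trained sequence model. Toy model and data.
import torch
import torch.nn as nn

torch.manual_seed(0)
L = 100                                   # sequence length
model = nn.Sequential(                    # toy 1D CNN trait predictor
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

# One-hot encode a random sequence: (batch=1, channels=4 bases, length=L).
seq = torch.zeros(1, 4, L)
seq[0, torch.randint(0, 4, (L,)), torch.arange(L)] = 1.0
seq.requires_grad_(True)

pred = model(seq)                         # predicted trait value
pred.backward()                           # d(prediction)/d(input)

# Saliency at the observed base: one score per locus; large |values| flag
# positions whose identity most affects the prediction.
saliency = (seq.grad * seq).sum(dim=1).squeeze(0)
print(saliency.abs().argsort(descending=True)[:5].tolist())
```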