2020
DOI: 10.1101/2020.02.19.956896
Preprint

Interpreting Deep Neural Networks Beyond Attribution Methods: Quantifying Global Importance of Genomic Features

Abstract: Despite deep neural networks (DNNs) having found great success at improving performance on various prediction tasks in computational genomics, it remains difficult to understand why they make any given prediction. In genomics, the main approaches to interpret a high-performing DNN are to visualize learned representations via weight visualizations and attribution methods. While these methods can be informative, each has strong limitations. For instance, attribution methods only uncover the independent contribut…

Citations: Cited by 7 publications (6 citation statements)
References: 34 publications
“…Although recent progress is extending this class of methods to second-order attributions 53,56–58, they cannot uncover the effect size of motifs on model predictions. Global interpretability analysis via in silico experiments is one avenue that shows great promise in uncovering the importance of whole features 59.…”
Section: Discussion (mentioning)
confidence: 99%
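The "global interpretability analysis via in silico experiments" referenced in that statement is the core idea of this preprint: embed a candidate motif into otherwise random background sequences and measure how much the model's prediction changes, which estimates the motif's effect size rather than per-nucleotide attributions. Below is a minimal sketch of that idea, assuming a trained Keras-style model exposing `predict`, one-hot sequences of shape (N, L, 4), and hypothetical helper names; it is an illustration of the concept, not the authors' implementation.

```python
import numpy as np

def embed_motif(background, motif, position):
    """Return copies of one-hot background sequences (N, L, 4) with a
    one-hot motif (M, 4) written in at a fixed position."""
    seqs = background.copy()
    seqs[:, position:position + motif.shape[0], :] = motif
    return seqs

def global_importance(model, background, motif, position):
    """Effect size of a motif: mean change in model predictions when the
    motif is embedded into otherwise random background sequences."""
    baseline = model.predict(background)                               # predictions without the motif
    embedded = model.predict(embed_motif(background, motif, position)) # predictions with the motif
    return float(np.mean(embedded - baseline))

# Hypothetical usage: 1000 random background sequences of length 200 with uniform base frequencies.
rng = np.random.default_rng(0)
background = rng.multinomial(1, [0.25] * 4, size=(1000, 200)).astype(np.float32)
# `motif` would be a one-hot array for a motif of interest, and `model` a trained genomics DNN.
```

Averaging over many random backgrounds marginalizes out the surrounding sequence context, which is what distinguishes this kind of global measure from single-sequence attribution maps.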
“…For instance, a method can point out certain motifs or associations that are important for the model in predicting the target, but how this reflects actual physicochemical interactions can be rather hard to interpret from the model alone. Nevertheless, this is an active area of research and new solutions are frequently developed (Lundberg and Lee, 2017; Chen and Capra, 2020; Koo and Ploenzke, 2020b), where rigorous testing as well as experimental verification of predictions will highlight the most promising approaches (Ancona et al., 2017). On the other hand, an alternative trend that is arguably more appropriate than interpreting black-box models is the development of inherently interpretable models (Rudin, 2019), where prior knowledge of gene expression can be built into the deep network structure itself (Ma et al., 2018; Tareen and Kinney, 2019; Liu et al., 2020).…”
Section: Learning the Protein-DNA Interactions Initiating Gene Expression (mentioning)
confidence: 99%
“…i.e., global interpretability) remains challenging [25]. Notably, neither approach is transparent as to how the model makes predictions.…”
Section: Introduction (mentioning)
confidence: 99%