2021
DOI: 10.1007/s00439-021-02387-9

Interpretable machine learning for genomics

Abstract: High-throughput technologies such as next-generation sequencing allow biologists to observe cell function with unprecedented resolution, but the resulting datasets are too large and complicated for humans to understand without the aid of advanced statistical methods. Machine learning (ML) algorithms, which are designed to automatically find patterns in data, are well suited to this task. Yet these models are often so complex as to be opaque, leaving researchers with few clues about underlying mechanisms. Inter…

Cited by 28 publications (22 citation statements); references 112 publications.
“…Interpretable, or explainable, machine learning can fill this important gap ( 75 , 76 ). Interpretable machine learning is particularly important in bioinformatics, since explaining a model’s predictions is critical to justify making high-stakes clinical or research decisions based on machine learning predictions ( 77 , 78 ). Accordingly, developers of deep learning approaches to SARS-CoV-2 should consider providing some functionality to interpret or explain predictions.…”
Section: Looking Inside the Deep Learning Black Box
confidence: 99%
“…While neural networks excel at fitting data, they suffer from a "black box" problem: it is very hard to explain a neural network's predictions. 25 We add two layers to not only help classify data but also allow us to visualize and potentially interpret the models that we train. The first is a forward attention layer 26, based on a structure used to analyze text 27 that can also be applied to biological sequences.…”
Section: Model Design
confidence: 99%
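The attention idea quoted above lends itself to a compact illustration. The sketch below is not the cited model; it is a minimal, assumed example of an additive attention pooling layer in PyTorch whose per-position weights can be inspected after training, which is the interpretability mechanism the statement describes. The class name `AttentionPooling`, the dimensions, and the usage line are illustrative assumptions.

```python
# Illustrative sketch only: a simple additive attention layer placed on top of a
# sequence encoder so that per-position weights can be visualized. Names and
# dimensions are assumptions, not the cited paper's code.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)  # one scalar score per sequence position

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_dim), e.g. encoder outputs for a biological sequence
        weights = torch.softmax(self.score(hidden_states).squeeze(-1), dim=1)  # (batch, seq_len)
        pooled = torch.einsum("bs,bsh->bh", weights, hidden_states)            # weighted sum over positions
        return pooled, weights  # weights can be plotted to see which positions drove the prediction

# Hypothetical usage: pooled, attn = AttentionPooling(128)(encoder_out); logits = classifier(pooled)
```

Because the attention weights sum to one over sequence positions, plotting them against the input sequence gives a direct, if approximate, view of which regions the classifier attended to.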
“…With the complexity of machine learning algorithms, interpretability does not come directly from the algorithm itself, but instead through an explanation model. For this, Shapley values have been proposed (Lundberg & Lee, 2017) which are 'model agnostic': that is, applicable to machine learning models, regardless of the algorithm used (Watson, 2021). Shapley values are fully 'additive' too, meaning that they possess all the properties relevant to additive feature attribution, and so completely "attribute[s] an effect to each feature and, [by] summing the effects of all feature attributions, approximates the output of the original [analytic] model" (Lundberg & Lee, 2017, p. 2).…”
Section: Machine Learning For Online Learning Analysis
confidence: 99%
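Since this statement leans on the model-agnostic and additive properties of Shapley values, a brief example may help. The following is a minimal sketch assuming the open-source shap package and scikit-learn are available; the toy data, model choice, and tolerance are assumptions, not anything from the cited study. It checks the additivity property quoted above: the base value plus the summed per-feature attributions approximates each prediction.

```python
# Minimal sketch, assuming the shap and scikit-learn packages; data and model are placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(200, 5)                                   # toy feature matrix
y = X[:, 0] * 2.0 + X[:, 1] + np.random.randn(200) * 0.1     # toy target
model = RandomForestRegressor(n_estimators=50).fit(X, y)

explainer = shap.Explainer(model, X)   # unified interface; dispatches to a suitable explainer for the model
explanation = explainer(X[:10])

# Additivity: base value + sum of per-feature attributions ~= model prediction
approx = explanation.base_values + explanation.values.sum(axis=1)
print(np.allclose(approx, model.predict(X[:10]), atol=1e-2))
```

The same interface can be pointed at other model types, which is the "model agnostic" property the statement emphasizes: the explanation model, not the underlying algorithm, supplies the interpretability.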