2023
DOI: 10.1002/widm.1493

Interpretable and explainable machine learning: A methods‐centric overview with concrete examples

Abstract: Interpretability and explainability are crucial for machine learning (ML) and statistical applications in medicine, economics, law, and natural sciences and form an essential principle for ML model design and development. Although interpretability and explainability have escaped a precise and universal definition, many models and techniques motivated by these properties have been developed over the last 30 years, with the focus currently shifting toward deep learning. We will consider concrete examples of stat…

Cited by 27 publications (12 citation statements)
References 160 publications (290 reference statements)
“…Deep learning scoring functions, while powerful, inherently have a black-box nature, making them challenging to interpret. Trusting deep learning models becomes even more complicated without an in-depth understanding of the underlying neural network mechanics. To assess whether the model learns important features aligned with physical principles, we use an approach that masks the bonds between the ligand and protein atoms.…”
Section: Results
confidence: 99%
“…Trusting deep learning models becomes even more complicated without an in-depth understanding of the underlying neural network mechanics. [103] To assess whether the model learns important features aligned with physical principles, we use an approach that masks the bonds between the ligand and protein atoms. This approach is designed to highlight the pivotal role of each edge concerning biological activity.…”
Section: Concern in Scoring Power
confidence: 99%
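The bond-masking probe described in these excerpts is a perturbation-based attribution: occlude one ligand-protein edge at a time, re-score the complex, and read the drop in the predicted score as that edge's importance. Below is a minimal sketch under that reading; the `Graph` container and the `score` function are hypothetical stand-ins for the citing papers' trained scoring model, not their actual API.

```python
# Edge-masking attribution sketch. Assumes a scoring function that maps a
# protein-ligand interaction graph to a scalar; here `score` is a toy
# linear readout so the example runs end to end.
from dataclasses import dataclass
import numpy as np

@dataclass
class Graph:
    """Toy protein-ligand interaction graph: nodes are atoms, edges are
    ligand-protein contacts, each with one scalar feature."""
    edges: list            # list of (ligand_atom, protein_atom) index pairs
    weights: np.ndarray    # one scalar feature per edge

def score(g: Graph) -> float:
    # Stand-in for a trained scoring function (hypothetical, not the
    # cited models): a fixed linear combination of edge features.
    coef = np.linspace(1.0, 2.0, num=len(g.weights))
    return float(coef @ g.weights)

def edge_importance(g: Graph) -> np.ndarray:
    """Mask each ligand-protein edge in turn (zero its feature) and
    record how much the predicted score drops."""
    base = score(g)
    importance = np.empty(len(g.edges))
    for i in range(len(g.edges)):
        masked = Graph(edges=g.edges, weights=g.weights.copy())
        masked.weights[i] = 0.0          # "cut" this bond
        importance[i] = base - score(masked)
    return importance

g = Graph(edges=[(0, 5), (1, 7), (2, 9)], weights=np.array([0.4, 1.2, 0.1]))
print(edge_importance(g))   # larger drop => more pivotal contact
```

A larger score drop flags an edge the model treats as pivotal; checking that the flagged edges coincide with chemically meaningful contacts is what lets the citing authors test whether the model "learns important features aligned with physical principles."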
“…Mohseni et al provide a survey and framework to evaluate XAI systems [175]. Practical XAI methods are expanded upon by Marcinkevičs and Vogt [16] and Notovich et al [176], providing application examples and a taxonomy of techniques. Domain-specific applications are explored by Preuer et al [177] in drug discovery and by Tjoa and Guan [30] in medical imaging.…”
Section: Data Preparation and Transformation
confidence: 99%
“…They overview a myriad of interpretability methods, tailoring them for direct application within random forest models. Exploring the practical paradigms of XAI, Marcinkevičs and Vogt [16] and Notovich et al [176] present a variety of XAI methods within machine learning, while cementing these with application-specific examples and providing a taxonomy of different XAI techniques, respectively. The comprehensive nature of these works bridges the gap between theory and real-world applications.…”
Section: A.1 Related Surveys
confidence: 99%