Preprint, 2022
DOI: 10.26434/chemrxiv-2022-qfv02
A Perspective on Explanations of Molecular Prediction Models

Abstract: Chemists can be skeptical of using deep learning (DL) in decision making, due to the lack of interpretability of "black-box" models. Explainable artificial intelligence (XAI) is a branch of AI that addresses this drawback by providing tools to interpret DL models and their predictions. We review the principles of XAI in the domain of chemistry and emerging methods for creating and evaluating explanations. Then we focus on methods developed by our group and their application to predicting solubility, blood-brain …

Cited by 14 publications (19 citation statements)
References 95 publications
“…A range of XAI methods have been proposed in recent years 313,314 (Figure 4) and are discussed in more detail in two recent comprehensive reviews. 313,315 Two simple strategies for XAI are to use self-explainable white-box methods and to check how changing inputs affect the outputs of black-box networks.…”
Section: Future Opportunities
confidence: 99%
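The second strategy quoted above — checking how changing inputs affects the outputs of a black-box network — can be illustrated with a minimal sketch. The model below is a hypothetical stand-in (a hidden linear scorer), not any network from the cited works; the point is only that finite-difference perturbation recovers each input's influence without looking inside the model.

```python
def black_box(features):
    # Stand-in for an opaque predictor; the weights are hidden from the probe.
    weights = [0.8, -0.3, 0.1]
    return sum(w * f for w, f in zip(weights, features))

def sensitivity(model, features, eps=1e-4):
    """Finite-difference sensitivity of the model output to each input feature."""
    base = model(features)
    grads = []
    for i in range(len(features)):
        bumped = list(features)
        bumped[i] += eps  # perturb one feature, hold the rest fixed
        grads.append((model(bumped) - base) / eps)
    return grads

x = [1.0, 2.0, 3.0]
print(sensitivity(black_box, x))  # approximately recovers [0.8, -0.3, 0.1]
```

For a linear stand-in the recovered sensitivities equal the hidden weights; for a real nonlinear network they give only a local picture around the probed input, which is why such perturbation checks complement rather than replace self-explainable white-box models.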
“…30 Justification and interpretability offered by XAI methods not only provide evidence defending why a prediction is trustworthy with quantitative metrics but also refer to the degree of human understanding intrinsic within the model. 10,31,32 Numerous techniques are available to incorporate explainability in GNN and DNN models. 33,34 Our emphasis in this work lies in using a method known as attribution.…”
Section: Introduction
confidence: 99%
“…In this way, we identify "Pareto-optimal" IDP sequences, meaning that sequence perturbations cannot enhance the dynamics further without reducing the stability of the condensate. Finally, we examine sequence features of Pareto-optimal sequences and perform a counterfactual analysis [35,36] to identify the sequence determinants of the limiting thermodynamics-dynamics tradeoff. Taken together, our results demonstrate how sequence design can be used to tune thermodynamic and dynamical properties independently in the context of biomolecular condensates.…”
Section: Introduction
confidence: 99%
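The "Pareto-optimal" notion in the statement above — sequences where one property cannot be improved without degrading the other — can be sketched generically. The candidate scores below are invented for illustration (each tuple is a hypothetical stability/dynamics pair, both to be maximized); this is not the cited work's actual data or method.

```python
def pareto_front(points):
    """Return the points not dominated by any other point (maximize both objectives)."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in both objectives
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (stability, dynamics) scores for five candidate sequences.
candidates = [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9), (0.5, 0.5), (0.6, 0.6)]
print(pareto_front(candidates))  # [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9)]
```

The interior points are dropped because (0.7, 0.7) beats them in both objectives; the surviving front traces the tradeoff curve that sequence perturbations cannot cross, which is the sense in which the cited analysis identifies a limiting thermodynamics-dynamics tradeoff.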