2023
DOI: 10.1021/acs.jctc.2c01235
A Perspective on Explanations of Molecular Prediction Models

Abstract: Chemists can be skeptical in using deep learning (DL) in decision making, due to the lack of interpretability in “black-box” models. Explainable artificial intelligence (XAI) is a branch of artificial intelligence (AI) which addresses this drawback by providing tools to interpret DL models and their predictions. We review the principles of XAI in the domain of chemistry and emerging methods for creating and evaluating explanations. Then, we focus on methods developed by our group and their applications in pred…

Cited by 27 publications (31 citation statements)
References 139 publications (344 reference statements)
“…A range of XAI methods have been proposed in recent years (Figure ) and are discussed in more detail in two recent comprehensive reviews. Two simple strategies for XAI are to use self-explainable white-box methods and to check how changing inputs affect the outputs of black-box networks. Feature importance methods are a notable class of white-box methods that achieve explainability by identifying essential features based on model parameters such as weights and coefficients.…”
Section: Future Opportunities
confidence: 99%
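The feature-importance idea in the statement above can be sketched in a few lines: fit a self-explainable (white-box) linear model and rank input features by the magnitude of their learned coefficients. The data, true weights, and feature count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal white-box feature-importance sketch (illustrative data, not from the paper):
# fit a linear model and rank features by |coefficient|.
rng = np.random.default_rng(0)
n_samples, n_features = 200, 3
X = rng.normal(size=(n_samples, n_features))

# Assumed ground truth: feature 1 dominates, feature 2 is irrelevant.
true_w = np.array([0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

# Ordinary least squares fit; the coefficients ARE the explanation here.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Importance = coefficient magnitude; larger |w_i| means feature i moves the output more.
importance = np.abs(w)
ranking = np.argsort(importance)[::-1]
print("coefficients:", np.round(w, 2))
print("ranking (most to least important):", ranking)
```

Because the model is linear, this importance ranking is exact rather than approximate; for black-box networks one would instead perturb inputs and observe output changes, the second strategy the statement mentions.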
“…This can be achieved using explainable artificial intelligence (XAI) methods, which provide a window into the ML model’s decision-making process and correlations uncovered by the model through data analysis. Justification and interpretability offered by XAI methods not only provide evidence defending why a prediction is trustworthy with quantitative metrics but also refer to the degree of human understanding intrinsic within the model.…”
Section: Introduction
confidence: 99%
“…In this way, we identify “Pareto-optimal” IDP sequences, meaning that sequence perturbations cannot enhance the dynamics further without reducing the stability of the condensate. Finally, we examine sequence features of Pareto-optimal sequences and perform a counterfactual analysis [35, 36] to identify the sequence determinants of the limiting thermodynamics–dynamics tradeoff. Taken together, our results demonstrate how sequence design can be used to tune thermodynamic and dynamical properties independently in the context of biomolecular condensates.…”
Section: Introduction
confidence: 99%
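The counterfactual analysis mentioned above can be illustrated on a toy model: find the smallest single-feature change that flips a model's decision, which identifies the feature that "determines" the prediction. The linear scoring model, weights, and threshold here are hypothetical stand-ins, not the IDP sequence model of the cited work.

```python
import numpy as np

# Toy counterfactual analysis (hypothetical model, not the paper's):
# a linear score is thresholded into a binary decision, and we search
# for the minimal one-feature perturbation that flips that decision.
weights = np.array([1.5, -0.5, 2.0])  # assumed linear scoring model
threshold = 1.0                        # assumed decision boundary

def predict(x):
    return float(x @ weights) > threshold

def counterfactual(x):
    """Return (feature index, perturbed input) for the smallest
    single-feature change that flips predict(x)."""
    gap = threshold - float(x @ weights)  # signed distance to the boundary (score units)
    best = None
    for i, w in enumerate(weights):
        if w == 0:
            continue  # this feature cannot move the score
        delta = gap / w * 1.01  # just past the boundary so the class actually flips
        if best is None or abs(delta) < abs(best[1]):
            best = (i, delta)
    i, delta = best
    x_cf = x.copy()
    x_cf[i] += delta
    return i, x_cf

x = np.array([0.2, 0.4, 0.1])
print("original prediction:", predict(x))
i, x_cf = counterfactual(x)
print("changed feature:", i)
print("counterfactual prediction:", predict(x_cf))
```

The feature requiring the smallest nudge is the one with the largest effective leverage on the decision; in the cited work the same logic is applied to sequence perturbations rather than numeric features.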