2021
DOI: 10.48550/arxiv.2112.11407
Preprint

Toward Explainable AI for Regression Models

Abstract: In addition to the impressive predictive power of machine learning (ML) models, explanation methods have more recently emerged that enable an interpretation of complex nonlinear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. While such Explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (…)

Cited by 5 publications (6 citation statements) | References: 58 publications
“…A regression based XAI framework could thus accelerate the development of such techniques, because the reasons why the networks fail to generalize might be better understood for both specific local scale features such as where the Gulf Stream leaves the continental shelf and larger scale processes. In further work, we will benefit from the ongoing recent research developments in XAI for regression, for example, in Letzgus et al (2021), and aim to apply our methodology to this more challenging problem.…”
Section: Discussion (mentioning)
confidence: 99%
“…The most popular explainability techniques were designed for classifiers, and special care needs to be taken when using them for regression, see [74]. Here we adopt the simple yet powerful strategy of [75], where explanations of deep regression models are obtained directly using methods for classification.…”
Section: Methods (mentioning)
confidence: 99%
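As an illustration of that strategy, the sketch below applies a classification-style attribution (gradient × input) directly to the scalar output of a regression network. This is a minimal sketch assuming a PyTorch setup; the toy model and variable names are hypothetical, and the exact procedure of [75] is not reproduced here.

import torch
import torch.nn as nn

# Hypothetical small regression model, for illustration only.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # one input sample
y = model(x)                               # predicted scalar value, shape (1, 1)

# Treat the regression output like a class logit: backpropagate it and
# form per-feature gradient-times-input relevance scores.
y.sum().backward()
relevance = (x.grad * x).detach().squeeze()
print(relevance)  # attribution of the prediction onto the 8 input features

Note that the raw real-valued output is backpropagated directly rather than a class logit or softmax score, which is exactly the kind of step where [74] argues regression explanations need special care.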
“…A wide range of machine learning (ML) approaches allows for explaining the chemistry of molecules, attributing which parts of the molecules are responsible for the chemical property of interest [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19] , and lessening the black box challenge of machine learning 20,21 . Typical explainable ML approaches that provide atomwise attribution include dummy atoms 22 , classification of atoms by chemical intuition 23 , regression models 24 , graph neural network (GNN) attributions [25][26][27][28] with gradients 29 , perturbations 30 , decompositions 31 , and surrogates 32 . In contrast, fragment-based explainability approaches generate importance values for groups of atoms or functional groups (subgraphs), e.g., Hammett equation 33 , matched molecular pairs [34][35][36] , molecular scaffolds 18,37 , and counterfactuals 38 .…”
Section: Background and Summary (mentioning)
confidence: 99%
“…Various metrics have also been proposed to quantify the accuracy of explanation values. Some metrics focus on characterizing post hoc explanations, such as Grad-Cam 48 , non-zero reference 24 , and the assumption of node explanation smoothness 60 . Other metrics focus on the accuracy of the explanations with respect to the ground truth 41,60 , which relies on the availability of ground-truth explanation datasets.…”
Section: Background and Summary (mentioning)
confidence: 99%
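To make the ground-truth comparison concrete, the sketch below scores a predicted attribution vector against a ground-truth attribution using cosine similarity. This is a minimal, hypothetical example; the cited works may use other metrics (e.g., rank correlation), and the function name and toy data are assumptions.

import numpy as np

def attribution_accuracy(pred_attr: np.ndarray, true_attr: np.ndarray) -> float:
    """Cosine similarity between predicted and ground-truth attributions."""
    denom = float(np.linalg.norm(pred_attr) * np.linalg.norm(true_attr))
    return float(np.dot(pred_attr, true_attr)) / denom if denom > 0 else 0.0

# Toy example: attributions over five atoms/features.
pred = np.array([0.8, 0.1, 0.0, 0.6, 0.2])
true = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
print(attribution_accuracy(pred, true))  # approximately 0.97 for this toy pair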