2020
DOI: 10.1007/s10994-020-05901-8
A decision-theoretic approach for model interpretability in Bayesian framework

Abstract: A salient approach to interpretable machine learning is to restrict modeling to simple models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users’ preferences, not the data generation mechanism; it is more natural to formulate interpretability as a utility function. In this work, we propose an interpretability utility, which explicates the trade-off between explanation fidelity and in…
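As a rough illustration of the abstract's idea of formulating interpretability as a utility, the sketch below scores candidate proxy models by a combination of fidelity to a reference model's predictions and a complexity penalty, then keeps the highest-scoring one. This is a hedged reading of the (truncated) abstract, not the paper's actual utility: the penalty form, the weight lam, and the tree-depth search are illustrative assumptions.

```python
# A minimal sketch, assuming an interpretability utility that trades off
# explanation fidelity against model complexity. The specific utility
# (negative squared error minus a depth penalty) and the weight lam are
# illustrative assumptions, not the paper's formulation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def interpretability_utility(proxy, X, reference_pred, lam=0.02):
    """Utility = fidelity to the reference model minus a complexity penalty."""
    fidelity = -np.mean((proxy.predict(X) - reference_pred) ** 2)
    complexity = proxy.get_depth()  # crude stand-in for interpretability
    return fidelity - lam * complexity

def select_proxy(X, reference_pred, depths=range(1, 8), lam=0.02):
    """Pick the tree depth that maximizes the (assumed) utility.

    Usage: proxy = select_proxy(X, reference_model.predict(X))
    """
    candidates = [DecisionTreeRegressor(max_depth=d).fit(X, reference_pred)
                  for d in depths]
    return max(candidates,
               key=lambda t: interpretability_utility(t, X, reference_pred, lam))
```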

Cited by 10 publications (6 citation statements)
References 38 publications
“…However, as decision tree predictions are piecewise constant approximations, rather than continuous predictions, it is challenging to extrapolate them [80]. This indicates that a numerical value higher than the maximum value of a given output cannot be predicted. The output value beyond the maximum point loses any physical meaning.…”
Section: Results
Mentioning (confidence: 99%)
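The piecewise-constant behavior these statements describe is easy to verify directly. Below is a minimal sketch (not from the cited work; the data and tree settings are illustrative assumptions) showing that a fitted regression tree's predictions plateau at the largest training target, no matter how far a query point lies outside the training range.

```python
# A minimal sketch illustrating why a decision-tree regressor cannot
# extrapolate: its prediction is a piecewise-constant function of the
# inputs, so no query can yield a value above the maximum target seen
# during training. Data and settings are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 5.0, size=(200, 1))  # training inputs in [0, 5]
y_train = X_train.ravel() ** 2                  # monotone target, max ~ 25

tree = DecisionTreeRegressor(max_depth=4).fit(X_train, y_train)

X_query = np.array([[4.0], [5.0], [10.0], [100.0]])  # last two lie far outside
print(tree.predict(X_query))  # predictions plateau near y_train.max()
print(y_train.max())          # no prediction can exceed this value
```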
“…This behavior indicates that EA should be optimized to be a value near 4.25 eV in practice. However, as decision tree predictions are piecewise constant approximations, rather than continuous predictions, it is challenging to extrapolate them. This indicates that a numerical value higher than the maximum value of a given output cannot be predicted.…”
Section: Results
Mentioning (confidence: 99%)
“…Nevertheless, we demonstrate empirically that our approach is able to match or exceed the performance of existing methods, with respect to the realism of the CEs generated. In future work, methodological developments could be explored by adapting the proposed method to work for black-box models (Afrabandpey et al, 2020).…”
Section: Discussion
Mentioning (confidence: 99%)
“…Reference models have been used before for tasks other than variable selection, as in Afrabandpey et al. (2020), where the authors constrain the projection of a complex neural network to be interpretable (e.g., projecting onto decision trees). Closer to variable selection and related to our approach, Piironen and Vehtari (2016) use projection predictive inference and impose further constraints on the projection of a GP reference model to perform variable selection, given the identifiability issue of the direct projection.…”
Section: Related Methods
Mentioning (confidence: 99%)
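The projection idea this statement refers to can be illustrated with a small sketch: a flexible "reference" model is fit first, and a simple decision tree is then fit to the reference model's predictions rather than to the raw labels, so the tree acts as an interpretable projection of the reference model. This is a hedged illustration of the general pattern, not the exact procedure of Afrabandpey et al. (2020) or Piironen and Vehtari (2016); the model choices and data are assumptions.

```python
# A hedged sketch of projecting a flexible reference model onto an
# interpretable surrogate: the tree is trained on the reference model's
# predictions, not on the noisy labels. Models and data are illustrative
# assumptions, not the exact procedure from the cited papers.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

# Step 1: fit a flexible reference model to the data.
reference = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Step 2: "project" the reference model onto a shallow decision tree by
# fitting the tree to the reference model's predictions.
proxy = DecisionTreeRegressor(max_depth=3).fit(X, reference.predict(X))

# Fidelity of the projection: how closely the tree reproduces the reference.
fidelity = np.mean((proxy.predict(X) - reference.predict(X)) ** 2)
print(f"mean squared discrepancy to reference: {fidelity:.4f}")
```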