2021
DOI: 10.48550/arxiv.2103.15429
Preprint

Efficient Explanations from Empirical Explainers

Abstract: Amid a discussion about Green AI in which we see explainability neglected, we explore the possibility to efficiently approximate computationally expensive explainers. To this end, we propose the task of feature attribution modelling that we address with Empirical Explainers. Empirical Explainers learn from data to predict the attribution maps of expensive explainers. We train and test Empirical Explainers in the language domain and find that they model their expensive counterparts well, at a fraction of the cost …
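The abstract's core idea lends itself to a brief sketch: a lightweight student model is trained on attribution maps precomputed by an expensive explainer (e.g., Integrated Gradients run offline on the downstream model) so that it can predict those maps directly from the input. The code below is a minimal illustration of such feature attribution modelling, not the authors' implementation; the BiLSTM architecture, the MSE objective, and all names and dimensions (EmpiricalExplainer, EMB_DIM, and so on) are assumptions made for this example.

```python
# Minimal sketch (assumed setup, not the paper's code): train an inexpensive
# "Empirical Explainer" to regress the attribution maps of an expensive explainer.
import torch
import torch.nn as nn

EMB_DIM, SEQ_LEN, BATCH = 64, 32, 16  # illustrative sizes

class EmpiricalExplainer(nn.Module):
    """Lightweight model mapping token embeddings to per-token attribution scores."""
    def __init__(self, emb_dim: int):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, emb_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * emb_dim, 1)  # one scalar attribution per token

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.encoder(token_embeddings)
        return self.head(hidden).squeeze(-1)   # shape: (batch, seq_len)

# Dummy stand-ins for (token embeddings, expensive attribution maps). In practice
# the targets would be computed once, offline, by the expensive explainer.
inputs = torch.randn(BATCH, SEQ_LEN, EMB_DIM)
target_attributions = torch.randn(BATCH, SEQ_LEN)

explainer = EmpiricalExplainer(EMB_DIM)
optimizer = torch.optim.Adam(explainer.parameters(), lr=1e-3)

for step in range(100):  # feature attribution modelling: regress the maps
    optimizer.zero_grad()
    predicted = explainer(inputs)
    loss = nn.functional.mse_loss(predicted, target_attributions)
    loss.backward()
    optimizer.step()
```

Once trained, a single forward pass of such a model stands in for the many forward and backward passes the expensive explainer would otherwise require at inference time, which is where the efficiency claim of the abstract comes from.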

Cited by 1 publication (1 citation statement)
References 8 publications
“…A central hub 1. would increase the comparability and replicability of explainability research, 2. would mitigate the computational burden, 3. would mitigate the implementational burden since in-depth expert knowledge of the explainers and models is required. Put differently, a central data hub containing a wide variety of feature attribution maps and offering easy access to them would (1) democratize explainability research to a certain degree, and (2) contribute to green NLP (Strubell et al, 2019) and green XAI (Schwarzenberg et al, 2021) by circumventing redundant computations.…”
Section: Introduction (mentioning)
Confidence: 99%