Evaluating Explanations: How much do explanations from the teacher aid students?
2020 · Preprint · DOI: 10.48550/arxiv.2012.00893

Cited by 17 publications (37 citation statements) · References 0 publications
“…To improve the model's learning, explanations can be used in a diverse range of ways, including as extra supervision or regularization (Pruthi et al., 2020; Hase et al., 2020; Narang et al., 2020; Andreas et al., 2017), pruned inputs (Jain et al., 2020; Bastings et al., 2019; Lei et al., 2016), additional inputs (Hase and Bansal, 2021; Co-Reyes et al., 2018), and intermediate variables (Zhou et al., 2020). The most similar work to ours is Pruthi et al. (2020), which proposed using extractive text explanations to regularize a PLM's self-attention mechanism and demonstrated considerable performance gains. Still, methods for learning from explanations have largely focused on domains like text and images, as opposed to graphs.…”
Section: Learning From Model Explanations
confidence: 84%
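The attention-regularization recipe recalled in this excerpt can be made concrete with a short sketch. The following Python (PyTorch) snippet is only an illustration of the general idea, not Pruthi et al.'s (2020) actual code: the function names, the KL-based penalty, and the single pooled attention distribution are all assumptions.

import torch.nn.functional as F

def explanation_attention_loss(attn, rationale_mask, eps=1e-8):
    """KL divergence pulling the model's attention distribution toward
    a normalized binary rationale mask over input tokens.

    attn:           (batch, seq_len) attention weights, each row sums to 1
    rationale_mask: (batch, seq_len) 1.0 on tokens the teacher marked
                    as explanation, 0.0 elsewhere
    """
    target = rationale_mask / (rationale_mask.sum(dim=-1, keepdim=True) + eps)
    return F.kl_div((attn + eps).log(), target, reduction="batchmean")

def joint_loss(logits, labels, attn, rationale_mask, lam=1.0):
    # Task loss plus the attention penalty; lam (a hyperparameter) controls
    # how strongly the teacher's explanations constrain the student's attention.
    return F.cross_entropy(logits, labels) + lam * explanation_attention_loss(
        attn, rationale_mask
    )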
“…However, it is not necessarily obvious how these KG explanations should be used, especially for improving the model. Recently, Pruthi et al. (2020) proposed using text saliency explanations to regularize a PLM's self-attention mechanism, then evaluating explanations by their ability to improve the PLM's performance. Inspired by this idea, we view KG explanations as rich signals for teaching KG-augmented models how to filter out task-irrelevant KG information.…”
Section: Introduction
confidence: 99%
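The evaluation criterion this excerpt refers to — scoring an explanation by how much it improves a student model — is the core protocol of the indexed paper. A framework-agnostic sketch of that gain measurement; teacher_predict, explain, and train_student are hypothetical callables supplied by the user, not the paper's code:

from typing import Any, Callable, Sequence

def explanation_gain(
    teacher_predict: Callable[[Any], int],
    explain: Callable[[Any], Any],
    train_student: Callable[[list], Callable[[Any], int]],
    train_inputs: Sequence[Any],
    test_inputs: Sequence[Any],
) -> float:
    # Students are trained to reproduce the teacher's predictions.
    sim_labels = [teacher_predict(x) for x in train_inputs]

    # Baseline student sees only inputs and teacher labels.
    student_plain = train_student(list(zip(train_inputs, sim_labels)))

    # Explanation-aided student additionally sees the teacher's explanations.
    student_expl = train_student(
        list(zip(train_inputs, map(explain, train_inputs), sim_labels))
    )

    def sim_acc(student: Callable[[Any], int]) -> float:
        return sum(
            student(x) == teacher_predict(x) for x in test_inputs
        ) / len(test_inputs)

    # A positive gap means the explanations genuinely helped the student
    # simulate the teacher.
    return sim_acc(student_expl) - sim_acc(student_plain)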
“…A variety of automated metrics to measure explanation quality have been proposed in the past. However, many of them can be easily gamed (Hooker et al., 2019; Treviso and Martins, 2020b; Hase et al., 2020a); see Pruthi et al. (2020) for a detailed discussion of this point. A popular way to evaluate explanations is to compare the produced explanations with expert-collected rationales (Mullenbach et al., 2018; DeYoung et al., 2020).…”
Section: Related Work
confidence: 99%
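The rationale-comparison evaluation mentioned at the end of this excerpt is typically a count-based overlap score. A minimal sketch of token-level F1 against an expert rationale, assuming explanations are represented as sets of token indices (the exact matching rules in benchmarks such as ERASER, per DeYoung et al. (2020), differ in detail):

def rationale_f1(predicted: set, gold: set) -> float:
    # Token-level F1 between the model's extracted explanation and the
    # expert-annotated rationale, both given as sets of token indices.
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: explanation marks tokens {2, 3, 5}; the annotator marked {3, 5, 6}.
# Two overlapping tokens give precision = recall = 2/3, so F1 = 2/3.
assert abs(rationale_f1({2, 3, 5}, {3, 5, 6}) - 2 / 3) < 1e-9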
“…Chandrasekaran et al. (2018) present model explanations during the testing phase, whereas others do not include explanations at test time, as explanations could "leak" model output (see Pruthi et al. (2020); Jacovi and Goldberg (2020)).…”
confidence: 99%
“…Explainers are compared with count-based metrics (Poerner et al., 2018; De Cao et al., 2020; Tsang et al., 2020; Nguyen and Martínez, 2020; Bodria et al., 2021; Ding and Koehn, 2021; Yin et al., 2021; Hase et al., 2021; Kokhlikyan et al., 2021; Zafar et al., 2021; Sinha et al., 2021) and against human judgement (Nguyen, 2018; Lertvittayakumjorn and Toni, 2019; Hase and Bansal, 2020; Prasad et al., 2020). Feature attribution scores have also been incorporated into model training (Ross et al., 2017; Liu and Avci, 2019; Erion et al., 2021; Pruthi et al., 2020).…”
Section: Introduction
confidence: 99%
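One common way attribution scores enter training, as in Ross et al. (2017), is a penalty on input gradients over features annotated as irrelevant. A hedged PyTorch sketch of that idea; right_reasons_loss and irrelevant_mask are illustrative names, and the gradient-of-log-probabilities attribution is one simple choice among several used in the cited work:

import torch
import torch.nn.functional as F

def right_reasons_loss(model, x, y, irrelevant_mask, lam=1.0):
    # irrelevant_mask is 1.0 where the model should NOT rely on the input.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Gradient of the log-probabilities w.r.t. the input: a simple
    # attribution whose mass on masked features is penalized.
    # create_graph=True lets the penalty itself be backpropagated.
    grads, = torch.autograd.grad(
        F.log_softmax(logits, dim=-1).sum(), x, create_graph=True
    )
    attribution_penalty = (irrelevant_mask * grads).pow(2).sum()
    return task_loss + lam * attribution_penalty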