Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1032
Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement

Abstract: The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explore post hoc explanation methods. We conduct the first comprehensive evaluation of explanation methods for NLP. To this end, we design two novel evaluation paradigms that cover two important classes of NLP problems: small context and large context problems. Both paradigms require no manual annotation and are therefore broadly applicable. We also introduce LIMSSE, an explanation method inspired by LIME that is desi…

Cited by 75 publications (75 citation statements)
References 22 publications (21 reference statements)
“…An alternative default strategy in computer vision is to uniformly employ LRP-α1β0 in every hidden layer [53]; this rule has the advantage of having no free parameter and delivers positive explanations. On convolutional neural networks for text, LRP-ε with a small ε value was found to work well [3,57]; it provides a signed explanation.…”
Section: LRP in Deep Neural… (mentioning)
confidence: 85%
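The two LRP rules contrasted in the quotation above can be illustrated for a single dense layer. The following is a minimal NumPy sketch, not the cited papers' implementation; the function name `lrp_dense` and the layer shapes are illustrative assumptions. LRP-ε stabilizes the denominator with a small signed term and yields signed relevances; LRP-α1β0 keeps only positive contributions, has no free parameter, and yields positive relevances.

```python
import numpy as np

def lrp_dense(a, W, R_out, rule="epsilon", eps=0.01):
    """Redistribute output relevance R_out of a dense layer back to its
    inputs a, based on the contributions z[j, k] = a[j] * W[j, k]."""
    z = a[:, None] * W                       # shape (n_in, n_out)
    if rule == "epsilon":
        # LRP-eps: stabilize each column sum with a small signed term.
        # (Assumes no column sum is exactly zero.)
        denom = z.sum(axis=0)
        denom = denom + eps * np.sign(denom)
        return (z / denom * R_out).sum(axis=1)
    elif rule == "alpha1beta0":
        # LRP-alpha1beta0: redistribute via positive contributions only.
        zp = np.clip(z, 0.0, None)
        denom = zp.sum(axis=0)
        denom[denom == 0] = 1.0              # guard against division by zero
        return (zp / denom * R_out).sum(axis=1)
    raise ValueError(f"unknown rule: {rule}")
```

With a tiny ε, LRP-ε approximately conserves the total relevance and can produce negative (contradicting-evidence) scores, whereas LRP-α1β0 returns only non-negative scores, matching the signed-versus-positive distinction drawn in the quotation.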
“…For removing a word, we simply discard it from the input sequence and concatenate the remaining parts of the sentence. An alternative removal scheme would have been to set the corresponding word embedding to zero in the input (which in practice gave us similar results); however, the former enables us to generate more natural texts, although we acknowledge that the resulting sentence might be partly syntactically broken, as pointed out by Poerner et al. [57].…”
Section: Validating Explanations on Standard LSTMs: Selectivity and F… (mentioning)
confidence: 99%
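The deletion scheme described in this citation statement, and the selectivity-style check it is used for, can be sketched as follows. This is a hedged illustration under assumptions of my own: the helper names `delete_word` and `selectivity_curve` and the toy `predict` interface are not from the cited work.

```python
def delete_word(tokens, i):
    """Occlusion by deletion: drop token i and re-join the sequence.
    The result reads naturally but, as the quoted passage acknowledges,
    may be partly syntactically broken."""
    return tokens[:i] + tokens[i + 1:]

def selectivity_curve(tokens, relevance, predict):
    """Delete words most-relevant-first and record the model score after
    each deletion; a steep drop suggests a faithful explanation."""
    order = sorted(range(len(tokens)), key=lambda i: -relevance[i])
    kept = set(range(len(tokens)))
    scores = [predict([tokens[i] for i in sorted(kept)])]
    for i in order:
        kept.discard(i)
        scores.append(predict([tokens[i] for i in sorted(kept)]))
    return scores
```

The alternative scheme mentioned in the quotation would instead keep the sequence length fixed and zero out the word's embedding vector; the authors report that both gave similar results in practice.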
“…This section presents our automatic evaluation approach, which is an extension of the hybrid document paradigm (Poerner et al., 2018). The major advantage of automatic evaluation in the context of explanation methods is that it does not require manual annotation.…”
Section: Automatic Evaluation Using Fake Facts (mentioning)
confidence: 99%
“…The major advantage of automatic evaluation in the context of explanation methods is that it does not require manual annotation. Poerner et al. (2018) create hybrid documents by randomly concatenating fragments of different documents. We adapt this paradigm to our use case in the following way:…”
Section: Automatic Evaluation Using Fake Facts (mentioning)
confidence: 99%
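The hybrid-document construction these citation statements describe (randomly concatenating fragments of different documents) can be sketched as below. This is a minimal illustration of the general idea, not the paper's code; the function name `make_hybrid` and its parameters are assumptions. The key property is that each token's origin document is known, so an explanation for a class can be checked automatically against the document that actually carries that class, with no manual annotation.

```python
import random

def make_hybrid(docs, n_fragments=3, frag_len=20, seed=0):
    """Build a hybrid document from random fragments of tokenized docs.
    Returns the concatenated token list and, for each token, the index
    of the source document it came from (the automatic gold standard)."""
    rng = random.Random(seed)
    fragments, origins = [], []
    for _ in range(n_fragments):
        d = rng.randrange(len(docs))
        tokens = docs[d]
        start = rng.randrange(max(len(tokens) - frag_len, 1))
        frag = tokens[start:start + frag_len]
        fragments.extend(frag)
        origins.extend([d] * len(frag))
    return fragments, origins
```

An explanation method can then be scored, for instance, by how often its highest-relevance token for a predicted class lies inside a fragment whose source document bears that class's label.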