Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1407

Pathologies of Neural Models Make Interpretations Difficult

arXiv:1804.07781v3 [cs.CL]

Cited by 175 publications (98 citation statements). References 25 publications (34 reference statements).

“…We consider the Slug system, a seq2seq-based ensemble system, as the overall winner of this challenge. It received high human ratings for both naturalness and quality, as well as for automatic word-overlap metrics. [Footnote 30: Note that this problem appears to be more general, since it has also been reported in related fields, including image captioning (Rohrbach et al., 2018), machine translation (Koehn and Knowles, 2017; Lee et al., 2019), and question answering (Feng et al., 2018).]…”
Section: Winning System (mentioning)
confidence: 74%

“…Feng et al. [159] introduced a process, called “input reduction”, which can expose issues regarding overconfidence and oversensitivity in natural language processing models. Under input reduction, unimportant words are removed from the input text in an iterative fashion, while the model’s prediction for that input remains unchanged.…”
Section: Different Scopes Of Machine Learning Interpretability (mentioning)
confidence: 99%

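The “input reduction” procedure quoted above maps onto a simple loop. The following is a minimal Python sketch of the idea under stated assumptions, not the authors’ implementation: the paper selects words by gradient-based importance and uses beam search, whereas this greedy variant simply tries every single-word deletion. `predict_fn` is a hypothetical callable returning a (label, confidence) pair for a list of tokens.

```python
def input_reduction(tokens, predict_fn):
    """Greedy sketch of input reduction (Feng et al., 2018).

    Repeatedly deletes one word at a time as long as the model's
    predicted label stays the same; stops when any further deletion
    would flip the prediction.

    `predict_fn(tokens) -> (label, confidence)` is an assumed interface,
    not part of any specific library.
    """
    original_label, _ = predict_fn(tokens)
    reduced = list(tokens)

    while len(reduced) > 1:
        best_candidate, best_conf = None, float("-inf")
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            label, conf = predict_fn(candidate)
            # Keep only deletions that preserve the original prediction,
            # preferring the one the model is still most confident about.
            if label == original_label and conf > best_conf:
                best_candidate, best_conf = candidate, conf
        if best_candidate is None:
            break  # every remaining word is needed to keep the prediction
        reduced = best_candidate

    return reduced
```

In the paper, inputs reduced this way often shrink to a handful of words that are nonsensical to humans, yet the model keeps its prediction with high confidence, which is the overconfidence pathology the quoted passage refers to.
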
“…One shortcoming of neural machine translation (NMT), and of neural models in general, is that it is often difficult for humans to comprehend the reasons why the model is making predictions (Feng et al., 2018; Ghorbani et al., 2019). The main cause of this difficulty is that in neural models, information is implicitly represented by real-valued vectors, and conceptual interpretation of these vectors remains a challenge.…”
Section: Introduction (mentioning)
confidence: 99%