Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.771
NILE: Natural Language Inference with Faithful Natural Language Explanations

Abstract: The recent growth in the popularity and success of deep learning models on NLP classification tasks has been accompanied by the need to generate some form of natural language explanation for the predicted labels. Such generated natural language (NL) explanations are expected to be faithful, i.e., they should correlate well with the model's internal decision making. In this work, we focus on the task of natural language inference (NLI) and address the following question: can we build NLI systems which produce labels …

Cited by 88 publications (110 citation statements)
References 22 publications
“…This makes rationales intrinsic to the model, and tells the user what the prediction should be based on. Kumar and Talukdar (2020) highlight that this approach resembles post-hoc methods with the label and rationale being produced jointly (the end-to-end predict-then-explain setting). Thus, all but the pipeline predict-then-explain approach are suitable extensions of our models.…”
Section: Discussion and Future Directions
confidence: 99%
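The pipeline-versus-joint distinction in the excerpt above can be sketched in a few lines. All names below are hypothetical stand-ins, and the trivial string-overlap "classifier" is only a placeholder, not NILE's actual fine-tuned language models.

```python
def predict_label(premise: str, hypothesis: str) -> str:
    """Stub NLI classifier: returns only a label (placeholder heuristic)."""
    return "entailment" if hypothesis.lower() in premise.lower() else "neutral"


def explain_label(premise: str, hypothesis: str, label: str) -> str:
    """Stub post-hoc explainer, conditioned on an already-predicted label."""
    return f"The pair is labeled {label} based on overlap with the premise."


def pipeline_predict_then_explain(premise: str, hypothesis: str):
    # Pipeline setting: two separate stages. The explanation is generated
    # *after* (and conditioned on) the label, so it need not reflect the
    # classifier's internal decision making -- the faithfulness concern
    # raised in the paper.
    label = predict_label(premise, hypothesis)
    return label, explain_label(premise, hypothesis, label)


def joint_predict_and_explain(premise: str, hypothesis: str):
    # End-to-end setting: a single model emits label and rationale together
    # (sketched here as one function producing both from shared state).
    label = predict_label(premise, hypothesis)
    rationale = f"Predicted {label} from the same underlying features."
    return label, rationale
```

The point of the contrast: in the pipeline setting the rationale is an afterthought to a fixed label, whereas in the joint setting label and rationale are produced by one model, which is why the excerpt groups the joint setting with post-hoc methods.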
“…In this section, we briefly review related work on sentiment classification [11,13], knowledge-aware sentiment analysis [9,14], and natural language explanation for classification [5,16,23]. Sentiment Analysis: Sentiment analysis and emotion recognition have always attracted attention in multiple fields such as natural language processing, psychology, and cognitive science.…”
Section: Related Work
confidence: 99%
“…Structured Explanations: There is useful previous work on developing interpretable and explainable models (Doshi-Velez and Kim, 2017; Rudin, 2019; Hase and Bansal, 2020; Jacovi and Goldberg, 2020) for NLP. Explanations in NLP take three major forms: (1) extractive rationales or highlights (Zaidan et al., 2007; Lei et al., 2016; Yu et al., 2019; DeYoung et al., 2020), where a subset of the input text explains a prediction, (2) free-form or natural language explanations (Camburu et al., 2018; Rajani et al., 2019; Zhang et al., 2020; Kumar and Talukdar, 2020) that are not constrained to the input, and (3) structured explanations that range from semi-structured text (Ye et al., 2020) to chains of facts (Khot et al., 2020; Jhamtani and Clark, 2020; Gontier et al., 2020) to explanation graphs based on edges between chains of facts (Jansen et al., 2018; Jansen and Ustalov, 2019; Xie et al., 2020).…”
Section: Related Work
confidence: 99%
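As a concrete illustration of the three explanation forms enumerated in the excerpt above, one NLI example might carry each kind of explanation as sketched below. All field names and values are invented for illustration and are not drawn from any of the cited datasets.

```python
# A single NLI example (hypothetical data).
example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A person is performing music.",
    "label": "entailment",
}

# (1) Extractive rationale / highlight: a subset of the input, here given
#     as token indices into the premise ("playing a guitar ... stage").
extractive_rationale = {"premise_token_indices": [3, 4, 5, 8]}

# (2) Free-form natural language explanation: unconstrained text, not
#     limited to spans of the input.
free_form_explanation = (
    "Playing a guitar on stage is a way of performing music."
)

# (3) Structured explanation: a small chain of supporting facts, which
#     could also be viewed as edges of an explanation graph.
structured_explanation = [
    ("guitar", "is-a", "musical instrument"),
    ("playing a musical instrument", "entails", "performing music"),
]
```

The progression from (1) to (3) trades off how constrained the explanation is against how much reasoning structure it can express, which is the axis the excerpt's taxonomy organizes.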