Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.104
Looking Beyond Sentence-Level Natural Language Inference for Question Answering and Text Summarization

Abstract: Natural Language Inference (NLI) has garnered significant attention in recent years; however, the promise of applying NLI breakthroughs to other downstream NLP tasks has remained unfulfilled. In this work, we use the multiple-choice reading comprehension (MCRC) and checking factual correctness of textual summarization (CFCS) tasks to investigate potential reasons for this. Our findings show that: (1) the relatively shorter length of premises in traditional NLI datasets is the primary challenge prohibiting usage…

Cited by 29 publications (30 citation statements); references 23 publications.
“…The NLI models appear to be complementary to the QA model, improving performance even on out-of-domain data. We also see that our NQ-NLI+MNLI+QA outperforms Mishra et al (2021)+QA by a large margin. By inspecting the performance breakdown in Appendix C, we see the gap is mainly on SQuAD2.0 and SQuADadv.…”
Section: Results and Analysis
confidence: 56%
“…Moreover, the focus of other recent work in this space has been on transforming QA datasets into NLI datasets, which is a different end. Demszky et al (2018) and Mishra et al (2021) argue that QA datasets feature more diverse reasoning and can lead to stronger NLI models, particularly those better suited to strong contexts, but less attention has been paid to whether this agrees with classic definitions of entailment (Dagan et al, 2005) or short-context NLI settings (Williams et al, 2018).…”
Section: Background and Motivation
confidence: 99%