Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
DOI: 10.18653/v1/n19-1302
Repurposing Entailment for Multi-Hop Question Answering Tasks

Abstract: Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question. However, for multi-hop QA tasks, which require reasoning with multiple sentences, it remains unclear how best to utilize entailment models pre-trained on large-scale datasets such as SNLI, which are based on sentence pairs. We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks. Multee uses (i) a local module that helps lo…
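The reduction the abstract describes — scoring each context sentence as a premise against a question–answer hypothesis, then aggregating across sentences — can be sketched as follows. This is a minimal illustration, not the paper's Multee implementation: `entail_score` is a hypothetical word-overlap stand-in for a pre-trained SNLI-style sentence-pair model, and the weighted-sum aggregation is only a rough proxy for Multee's learned combination.

```python
def _words(text):
    """Lowercase, punctuation-stripped token set."""
    return {w.strip(".,?!").lower() for w in text.split()}

def entail_score(premise, hypothesis):
    """Toy proxy for an entailment probability: word overlap.
    A real system would use a model pre-trained on SNLI/MultiNLI."""
    p, h = _words(premise), _words(hypothesis)
    return len(p & h) / max(len(h), 1)

def multi_hop_answer_score(context_sentences, question, answer):
    """Score a candidate answer by aggregating per-sentence entailment.

    Each sentence contributes in proportion to its own entailment score,
    so evidence spread across multiple sentences can combine -- a crude
    stand-in for the learned aggregation the abstract alludes to.
    """
    hypothesis = f"{question} {answer}"
    scores = [entail_score(s, hypothesis) for s in context_sentences]
    total = sum(scores) or 1.0
    return sum(s * s for s in scores) / total

# Two-sentence ("multi-hop") context: the answer requires both facts.
context = [
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
]
question = "The Eiffel Tower is in the capital of which country?"
best = max(["France", "Germany"],
           key=lambda a: multi_hop_answer_score(context, question, a))
```

Here the correct candidate wins only because both sentences jointly overlap with the hypothesis, which is the intuition behind aggregating sentence-level entailment for multi-hop questions.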

Cited by 43 publications (55 citation statements). References 22 publications.
“…The task of selecting justification sentences is complex for multi-hop QA, because of the additional knowledge aggregation requirement (examples of such questions and answers are shown in Figures 1 and 2). Although various neural QA methods have achieved high performance on some of these datasets (Trivedi et al., 2019; Tymoshenko et al., 2017; Seo et al., 2016; Wang and Jiang, 2016; De Cao et al., 2018; Back et al., 2018), we argue that more effort must be dedicated to explaining their inference process.…”
Section: Introduction
confidence: 86%
“…In the first category, previous works (e.g., Trivedi et al., 2019) have used entailment resources, including labeled training datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017), to train components for selecting justification sentences for QA. Other works have explicitly focused on training sentence selection components for QA models (Min et al., 2018; Wang et al., 2019).…”
Section: Related Work
confidence: 99%
“…These large corpora have been used as part of larger benchmark sets, e.g., GLUE (Wang et al., 2018), and have proven useful for problems beyond NLI, such as sentence representation and transfer learning (Conneau et al., 2017; Subramanian et al., 2018; Reimers and Gurevych, 2019), automated question answering (Khot et al., 2018; Trivedi et al., 2019) and model probing (Warstadt et al., 2019; Geiger et al., 2020; Jeretic et al., 2020).…”
Section: Related Work
confidence: 99%
“…The significant advances of NLI have led researchers in many fields to use this task to solve various problems and apply it to applications that require inference between two expressions. These include question answering 45 , fact extraction 46 , generating video captions 47 , and judging textual quality 48 , among others.…”
Section: Natural Language Inference
confidence: 99%