Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1260
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering

Abstract: We propose an unsupervised strategy for the selection of justification sentences for multi-hop question answering (QA) that (a) maximizes the relevance of the selected sentences, (b) minimizes the overlap between the selected facts, and (c) maximizes the coverage of both question and answer. This unsupervised sentence selection method can be coupled with any supervised QA approach. We show that the sentences selected by our method improve the performance of a state-of-the-art supervised QA model on two multi-hop…
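The three criteria in the abstract — relevance, low redundancy, and coverage of both question and answer — can be illustrated with a minimal greedy sketch. This is an illustrative assumption, not the paper's actual formulation: the `tokens` and `select_justifications` helpers and the simple token-overlap scoring are invented here for clarity.

```python
def tokens(text):
    """Lowercase bag-of-words; a real system would lemmatize and drop stopwords."""
    return set(text.lower().split())

def select_justifications(question, answer, sentences, k=3):
    """Greedily pick up to k justification sentences, scoring each candidate by
    (a) relevance to question+answer, (b) low overlap with already-selected
    facts, and (c) coverage of not-yet-covered question/answer terms."""
    qa = tokens(question) | tokens(answer)
    selected, covered = [], set()
    candidates = list(sentences)
    for _ in range(k):
        best, best_score = None, float("-inf")
        for s in candidates:
            t = tokens(s)
            relevance = len(t & qa)                 # (a) overlap with Q+A
            redundancy = len(t & covered)           # (b) overlap with selected facts
            new_coverage = len((t & qa) - covered)  # (c) newly covered Q+A terms
            score = relevance + new_coverage - redundancy
            if score > best_score:
                best, best_score = s, score
        if best is None:
            break
        selected.append(best)
        covered |= tokens(best) & qa
        candidates.remove(best)
    return selected
```

Because scoring needs no labels, a selector like this can sit in front of any supervised QA model, which is the coupling the abstract describes.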

Cited by 35 publications (32 citation statements). References 71 publications.
“…Our work falls under the revitalized direction that focuses on the interpretability of QA systems, where the machine's inference process is explained to the end user in natural language evidence text (Qi et al., 2019; Yang et al., 2018; Wang et al., 2019b; Yadav et al., 2019b; Bauer et al., 2018). Several…”
Section: Related Work
Confidence: 99%
“…As shown by Khot et al. (2019a), Qi et al. (2019), and our Table 2, these techniques do not work well for complex multi-hop questions, which require knowledge aggregation from multiple related justifications. Some unsupervised methods extract groups of justification sentences (Yadav et al., 2019b), but these methods are exponentially expensive in the retrieval step. Contrary to all of these, AIR proposes a simpler and more efficient method for chaining justification sentences.…”
Section: Related Work
Confidence: 99%
“…Iida et al. (2019) and Nakatsuji and Okui (2020) incorporate some background knowledge into Seq2Seq models for why-questions and conclusion-centric questions. Some recent works (Feldman and El-Yaniv, 2019; Yadav et al., 2019; Nishida et al., 2019a) attempt to provide evidence or justifications for human-understandable explanation of the multi-hop inference process in factoid QA, where the inferred evidence is treated only as an intermediate step toward finding the answer. However, in non-factoid QA, the intermediate output is also important to form a complete answer, which requires a bridge between multi-hop inference and summarization.…”
Section: Related Work
Confidence: 99%
“…The emergence of large pretrained language models (LMs) (Devlin et al., 2019; Liu et al., 2019) yielded significant progress in question answering (QA), including complex QA tasks that require multi-hop reasoning (Banerjee et al., 2019; Asai et al., 2019; Yadav et al., 2019). Most of these state-of-the-art (SOTA) approaches address multi-hop reasoning tasks in a discriminative manner: they take the question, the candidate answer, and all the available context as input, and produce a single score indicating the likelihood that the answer is justified by the provided context (an example is shown in Figure 1).…”
Section: Introduction
Confidence: 99%