2020
DOI: 10.48550/arxiv.2005.01218
Preprint

Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering

Abstract: Evidence retrieval is a critical stage of question answering (QA), necessary not only to improve performance, but also to explain the decisions of the corresponding QA method. We introduce a simple, fast, and unsupervised iterative evidence retrieval method, which relies on three ideas: (a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using only GloVe embeddings, (b) an iterative process that reformulates queries focusing on terms that are not covered by e…
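To make the method described in the abstract concrete, below is a minimal sketch of alignment-based scoring with iterative query reformulation, intended as a reading aid rather than the authors' implementation. It assumes a `glove` dict mapping lowercased tokens to NumPy vectors and an `idf` dict of term weights; the whitespace tokenization, coverage threshold, and hop limit are illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def alignment_score(query_terms, sent_terms, glove, idf):
    """Soft alignment: for each query term, take the max cosine similarity
    to any sentence term, weight it by the query term's IDF, and sum."""
    score = 0.0
    for q in query_terms:
        if q not in glove:
            continue
        sims = [cosine(glove[q], glove[s]) for s in sent_terms if s in glove]
        if sims:
            score += idf.get(q, 1.0) * max(sims)
    return score

def iterative_retrieval(question, answer, sentences, glove, idf,
                        covered_thresh=0.95, max_hops=4):
    """Greedily pick the best-aligned justification sentence, then
    reformulate the query to keep only the terms not yet covered."""
    query = question.lower().split() + answer.lower().split()
    picked = []
    for _ in range(max_hops):
        if not query:          # stopping criterion: all query terms covered
            break
        candidates = [i for i in range(len(sentences)) if i not in picked]
        if not candidates:
            break
        best = max(candidates, key=lambda i: alignment_score(
            query, sentences[i].lower().split(), glove, idf))
        picked.append(best)
        sent_terms = sentences[best].lower().split()
        # a query term counts as covered once it aligns strongly with
        # some term in a retrieved sentence (threshold is an assumption)
        query = [q for q in query
                 if q in glove and not any(
                     s in glove and cosine(glove[q], glove[s]) >= covered_thresh
                     for s in sent_terms)]
    return [sentences[i] for i in picked]
```

Under these assumptions, each hop steers the next query toward the terms the retrieved evidence has not yet explained, which is what lets the method hop across sentences without supervision.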

Cited by 2 publications (3 citation statements)
References 22 publications (52 reference statements)
“…As noted by previous works (Yadav et al., 2019, 2020), the answer selection performance can decrease when increasing the number of used facts k for Transformers, underlining that knowledge aggregation with increasing distracting knowledge remains an open research problem. We evaluate how our approach stacks up against transformer-based approaches in this respect, presented in Figure 3.…”
Section: Knowledge Aggregation With Increasing Distractors (mentioning)
confidence: 83%
“…Pre-trained embeddings with heuristics: Pre-trained embeddings have the advantage of capturing semantic similarity, going beyond the lexical-overlap limitation imposed by the use of weighting schemes. This property has been shown to be useful for multi-hop and abstractive tasks, where approaches based on pre-trained word embeddings, such as GloVe (Pennington et al., 2014), have been adopted to perform semantic alignment between question, answer, and justification sentences (Yadav et al., 2020). Silva et al. (2018) employ word embeddings and semantic similarity scores to perform selective reasoning on commonsense knowledge graphs and construct explanations for textual entailment.…”
Section: Explicit Models (mentioning)
confidence: 99%
“…Banerjee and Baral (2020) propose a semantic ranking model based on BERT for QASC (Khot et al., 2020) and OpenBookQA. Transformers have shown improved performance on downstream answer prediction tasks when applied in combination with explanations constructed through explicit models (Yadav et al., 2019; Yadav et al., 2020; Valentino et al., 2020a).…”
Section: Latent Models (mentioning)
confidence: 99%