Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.363
If You Want to Go Far Go Together: Unsupervised Joint Candidate Evidence Retrieval for Multi-hop Question Answering

Abstract: Multi-hop reasoning requires aggregation and inference from multiple facts. To retrieve such facts, we propose a simple approach that retrieves and reranks sets of evidence facts jointly. Our approach first generates unsupervised clusters of sentences as candidate evidence by accounting for links between sentences and coverage of the given query. Then, a RoBERTa-based reranker is trained to bring the most representative evidence cluster to the top. We specifically emphasize the importance of retrieving evidenc…
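The clustering idea in the abstract can be illustrated with a minimal sketch. All names below are hypothetical: candidate clusters are grown greedily from each sentence by adding lexically linked sentences that increase query coverage, and clusters are ordered by that coverage. The paper's actual method uses alignment-based similarity and a trained RoBERTa reranker rather than this toy scoring.

```python
import re

# Hypothetical sketch: build candidate evidence clusters by linking sentences
# that share a term, then score each cluster by the fraction of query
# vocabulary it covers. Illustration only, not the paper's implementation.

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def build_clusters(sentences, query, max_size=3):
    """Greedily grow one cluster per seed sentence: add sentences that are
    linked to the cluster (share a term) and contribute new query terms."""
    q = tokens(query)
    clusters = []
    for i, seed in enumerate(sentences):
        cluster, covered = [i], tokens(seed) & q
        for j, cand in enumerate(sentences):
            if j == i or len(cluster) >= max_size:
                continue
            t = tokens(cand)
            linked = any(t & tokens(sentences[k]) for k in cluster)
            if linked and (t & q) - covered:  # contributes new query terms
                cluster.append(j)
                covered |= t & q
        clusters.append((cluster, len(covered) / max(len(q), 1)))
    # highest query coverage first; a trained reranker would refine this order
    return sorted(clusters, key=lambda c: -c[1])

sents = [
    "Plants absorb sunlight through chlorophyll.",
    "Chlorophyll is a green pigment in leaves.",
    "The mitochondria produce energy in cells.",
]
ranked = build_clusters(sents, "why are leaves green sunlight")
```

Here the first two sentences are linked through the shared term "chlorophyll" and together cover more of the query than either alone, so their joint cluster outranks any cluster seeded by the unrelated third sentence.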


Cited by 7 publications (6 citation statements)
References 39 publications (60 reference statements)
“…The similarity scores between tokens are effective in removing invalid information while keeping valid keywords; nevertheless, they ignore the contextual representation of the question. Additionally, Yadav et al (2021) propose JointRR, which uses the same similarity-score filtering as AIR but adds a RoBERTa re-ranker, yielding better retrieval performance. The answer classifier in AIR and JointRR is the same RoBERTa.…”
Section: Related Work (mentioning)
confidence: 99%
“…Justification order. Only supervised methods model the order of justifications in an RP (Asai et al 2020; Li et al 2021), while the unsupervised methods ignore it (Yadav et al 2020, 2021). PathRetriever adaptively scores each RP in a graph constructed from Wikipedia hyperlinks and document structures to model the relationships.…”
Section: Related Work (mentioning)
confidence: 99%
“…During training, heuristics are used to find the oracle query. • Yadav et al [171] retrieve k justification sentences using an alignment technique similar to Yadav et al [170]. The question Q is concatenated with each retrieved justification q_k, and token weights are assigned as follows: for each token t in the original question, if q_k contains t, the weight of t is 1; otherwise it is 2.…”
Section: Retrieval (mentioning)
confidence: 99%
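The token-weighting rule quoted above (question tokens already covered by a retrieved justification get weight 1, unresolved tokens get weight 2) is simple enough to sketch directly. The function name and the whitespace tokenization are assumptions for illustration; the cited work operates on its own tokenization.

```python
# Hypothetical sketch of the weighting rule described in the excerpt:
# a question token found in the retrieved justification q_k gets weight 1,
# an uncovered token gets the higher weight 2 (steering later hops toward it).

def token_weights(question, justification):
    just_tokens = set(justification.lower().split())
    return {t: 1 if t in just_tokens else 2
            for t in question.lower().split()}

w = token_weights("what makes leaves green", "chlorophyll makes leaves green")
# "what" is the only question token absent from the justification
```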
“…• Zhang et al [181] pass the node (document) representations to a binary classifier to predict their relevance and keep the k highest-scoring documents. • Yadav et al [171] use RoBERTa [92] for reranking, trained to predict the F1 score of an evidence chain. • Zhang et al [179] and 20 concatenate each paragraph with the question, feed it to BERT followed by a binary classifier, and keep the N (=3) paragraphs with the highest scores.…”
Section: Final Retrieval (Re-ranking) (mentioning)
confidence: 99%