A Comparison of Question Rewriting Methods for Conversational Passage Retrieval
Preprint, 2021
DOI: 10.48550/arxiv.2101.07382

Abstract: Conversational passage retrieval relies on question rewriting to modify the original question so that it no longer depends on the conversation history. Several methods for question rewriting have recently been proposed, but they were compared under different retrieval pipelines. We bridge this gap by thoroughly evaluating those question rewriting methods on the TREC CAsT 2019 and 2020 datasets under the same retrieval pipeline. We analyze the effect of different types of question rewriting methods on retrieval…

Cited by 2 publications (2 citation statements)
References 7 publications (20 reference statements)
“…The main models for QR are either generative (Vakulenko et al., 2021a; Yu et al., 2020) or extractive (Voskarides et al., 2020), i.e., the relevant tokens in the context are appended to the question. When a single model is used for both retriever and reader, generative models outperform extractive ones (Vakulenko et al., 2021b); however, mixing the two approaches further improves performance (Del Tredici et al., 2021). Our work is related to (Voskarides et al., 2020), as we also aim at extracting the relevant contextual information.…”
Section: Related Work
confidence: 99%
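As a concrete, if toy, illustration of the extractive style this statement describes, the sketch below appends salient history terms to the current question. This is a minimal sketch, not the cited systems' implementation: the term-selection heuristic (non-stopword history terms missing from the question) and the stopword list stand in for the learned term classifier of Voskarides et al. (2020), and the example conversation is invented.

```python
# Minimal sketch of extractive question rewriting: append relevant context
# tokens from the conversation history to the current question.
# The selection heuristic below is a hand-rolled stand-in, NOT the learned
# classifier used in the cited work.

STOPWORDS = {"the", "a", "an", "is", "are", "was", "what", "how", "it",
             "its", "of", "in", "on", "and", "or", "to", "tell", "me", "about"}

def extractive_rewrite(history: list[str], question: str) -> str:
    """Return the question with salient, previously unseen history terms appended."""
    seen = set(question.lower().replace("?", "").split())
    extra = []
    for turn in history:
        for tok in turn.lower().split():
            tok = tok.strip("?.!,")
            if tok and tok not in STOPWORDS and tok not in seen:
                extra.append(tok)
                seen.add(tok)
    return f"{question} {' '.join(extra)}" if extra else question

history = ["What is throat cancer?", "Is it treatable?"]
question = "What are its symptoms?"
print(extractive_rewrite(history, question))
# -> "What are its symptoms? throat cancer treatable"
```

A generative rewriter would instead produce a fluent, self-contained question ("What are the symptoms of throat cancer?"), which is the contrast the comparison above draws.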
“…Therefore, most conversational retrieval approaches so far introduce a query rewriting step, which essentially decomposes the conversational search problem into a query resolution problem and an ad-hoc retrieval problem. Regarding query resolution, the majority of methods perform an explicit query rewrite that attempts to place the user's question in the context of the conversation, either by expanding queries with terms from recent history [27] or by rewriting the full question with a sequence-to-sequence model [12, 16, 18, 25, 30]. Yu et al. [31] learn to better encode the user's question in a latent space so that the learnt embeddings are close to human-rewritten questions, while Lin et al. [17] use human-rewritten questions to generate large-scale pseudo-relevance labels and bring the user's question embeddings closer to the pseudo-relevant passage embeddings.…”
confidence: 99%
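The sequence-to-sequence branch mentioned in this statement can be sketched with an off-the-shelf rewriter. A minimal sketch, assuming the HuggingFace transformers library; the checkpoint name castorini/t5-base-canard (a T5 rewriter trained on the CANARD dataset) and the " ||| " turn separator are assumptions tied to that model family, so swap in whatever rewriter and input convention your pipeline uses.

```python
# Minimal sketch of generative (seq2seq) question rewriting, assuming the
# HuggingFace transformers library. The checkpoint and the " ||| " separator
# are assumptions (CANARD-style convention), not a prescribed setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "castorini/t5-base-canard"  # assumed checkpoint; replace with your own
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

# Conversation history plus the current, context-dependent question.
history = ["What is throat cancer?", "Is it treatable?"]
question = "What are its symptoms?"
source = " ||| ".join(history + [question])

inputs = tokenizer(source, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
rewrite = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(rewrite)  # ideally a self-contained question about throat cancer symptoms
```

The rewritten question can then be passed unchanged to any ad-hoc retrieval pipeline, which is exactly the decomposition the statement above describes.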