Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2022.emnlp-main.679
CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning

Cited by 6 publications (2 citation statements)
References 0 publications
“…Common methods involve selecting relevant tokens from the search session [23,35,43] and training a generative rewriter model using human-rewritten queries paired with their respective sessions [22,26,41,50]. Some research efforts incorporate reinforcement learning [5,48] or ranking signals [27,32] to align the generation process with the downstream search task. In contrast, CDR utilizes conversational search session data to perform end-to-end dense retrieval.…”
Section: Related Work 2.1 Conversational Search
confidence: 99%
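The "selecting relevant tokens from the search session" family of rewriters mentioned in the quote can be illustrated with a minimal sketch. The heuristic below is an assumption for illustration only (it is not CONQRR or any cited system): it expands the current turn with the most frequent informative tokens from earlier turns, approximating a self-contained query.

```python
# Toy "token selection" conversational query rewriter (illustrative only):
# append frequent non-stopword session tokens missing from the current turn.

from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "of", "what", "how",
             "about", "and", "tell", "me", "when", "was"}

def rewrite_query(session: list[str], current: str, top_k: int = 3) -> str:
    """Expand `current` with up to `top_k` frequent session tokens
    that are neither stopwords nor already present in the turn."""
    current_tokens = set(current.lower().split())
    counts = Counter(
        tok
        for turn in session
        for tok in turn.lower().split()
        if tok not in STOPWORDS and tok not in current_tokens
    )
    expansion = [tok for tok, _ in counts.most_common(top_k)]
    return current + (" " + " ".join(expansion) if expansion else "")

session = ["Tell me about the Eiffel Tower", "When was the eiffel tower built"]
print(rewrite_query(session, "How tall is it"))
```

A trained generative rewriter replaces this heuristic with a sequence-to-sequence model, and RL-based methods such as the one studied in this paper further optimize the rewrite for downstream retrieval quality rather than for similarity to a human rewrite.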
“…This approach allows for the use of existing retrievers for the search process. However, it is challenging to directly optimize the rewriting towards search [21,27,32,48]. Another approach, known as conversational dense retrieval (CDR), focuses on training a conversational dense retriever to grasp the search intent by implicitly learning the latent representations of encoded queries and passages.…”
Section: Introduction
confidence: 99%
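The conversational dense retrieval (CDR) alternative described in the quote scores a query against passages via the inner product of dense encodings. The sketch below is a toy stand-in under stated assumptions: a fixed random projection of bag-of-words plays the role of the trained query and passage encoders, which is enough to show the scoring and ranking mechanics but none of the learning.

```python
# Toy dense-retrieval scoring (illustrative stand-in for trained encoders):
# embed texts as L2-normalized sums of per-token random vectors, then rank
# passages by dot product with the query embedding.

import numpy as np

rng = np.random.default_rng(0)
VOCAB: dict[str, np.ndarray] = {}

def encode(text: str, dim: int = 16) -> np.ndarray:
    """Sum per-token random vectors and L2-normalize the result."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        if tok not in VOCAB:
            VOCAB[tok] = rng.normal(size=dim)
        vec += VOCAB[tok]
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def rank(query: str, passages: list[str]) -> list[str]:
    """Order passages by descending similarity to the query."""
    q = encode(query)
    return sorted(passages, key=lambda p: -float(q @ encode(p)))

passages = ["the eiffel tower is 330 metres tall", "paris has many museums"]
print(rank("how tall is the eiffel tower", passages)[0])
```

In a real CDR system the encoders are learned end-to-end from conversational search sessions, so the query encoding can resolve context-dependent turns directly, without an explicit rewriting step.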