2021
DOI: 10.48550/arxiv.2110.03611
Preprint

Adversarial Retriever-Ranker for dense text retrieval

Abstract: Current dense text retrieval models face two typical challenges. First, they adopt a siamese dual-encoder architecture that encodes queries and documents independently for fast indexing and search, but this neglects finer-grained term-wise interactions and yields sub-optimal recall. Second, they rely heavily on a negative-sampling technique to construct the negative documents for the contrastive loss. To address these challenges, we present Adversarial Retriever-Ranker (AR2), which consists o…
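
To make the setup the abstract criticizes concrete, here is a minimal sketch of a siamese dual-encoder trained with the standard in-batch-negative contrastive loss. This is not the authors' implementation: the random embeddings stand in for BERT-style encoders, and all names and shapes are illustrative assumptions.

```python
# Minimal dual-encoder sketch (illustrative, not the AR2 code).
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb, d_emb):
    """q_emb: (B, H) query embeddings; d_emb: (B, H) embeddings of each
    query's positive document. Every other document in the batch acts as
    a sampled negative -- the negative-sampling scheme the abstract says
    dense retrievers depend on."""
    scores = q_emb @ d_emb.T                   # (B, B) dot-product scores
    labels = torch.arange(scores.size(0))      # positives on the diagonal
    return F.cross_entropy(scores, labels)

# Queries and documents are encoded independently (the siamese property),
# so document vectors can be pre-computed and indexed for fast search,
# but no term-level query-document interaction is modeled.
B, H = 8, 128
q_emb = F.normalize(torch.randn(B, H), dim=-1)  # stand-in for a query encoder
d_emb = F.normalize(torch.randn(B, H), dim=-1)  # stand-in for a doc encoder
print(in_batch_contrastive_loss(q_emb, d_emb))
```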

Cited by 6 publications (13 citation statements)
References 45 publications
“…Therefore, using the knowledge learned by the ranker to guide the retriever is an effective approach. The Adversarial Retriever-Ranker (AR2) [34] model introduces the generative adversarial network into information retrieval, drawing inspiration from the IRGAN framework [31]. The dual-encoder retriever serves as the generator, retrieving hard negative samples to confuse the ranker.…”
Section: Combination Of Dual-encoder And… · Citation type: mentioning
confidence: 99%
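
The GAN-style loop this statement describes can be sketched as below. This is a hedged illustration rather than the released AR2 training code: the score tensors stand in for a dual-encoder retriever and a cross-encoder ranker, and the KL-shaped retriever objective is one common way to formalize "confusing the ranker"; the paper's exact objective may differ in its regularization and details.

```python
# Adversarial retriever-ranker sketch (illustrative, not released AR2 code).
import torch
import torch.nn.functional as F

def ranker_loss(pos_scores, neg_scores):
    """Discriminator step: the cross-encoder ranker learns to rank the gold
    document (index 0) above the retriever-mined hard negatives.
    pos_scores: (B,); neg_scores: (B, K)."""
    scores = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)
    labels = torch.zeros(scores.size(0), dtype=torch.long)
    return F.cross_entropy(scores, labels)

def retriever_loss(retriever_neg_scores, ranker_neg_scores):
    """Generator step: push the retriever's distribution over the K retrieved
    candidates toward the frozen ranker's, so the retriever learns to surface
    exactly the negatives the ranker still finds plausible (assumed KL form)."""
    log_p_ret = F.log_softmax(retriever_neg_scores, dim=1)
    log_p_rank = F.log_softmax(ranker_neg_scores.detach(), dim=1)
    # KL(p_retriever || p_ranker), averaged over the batch
    return (log_p_ret.exp() * (log_p_ret - log_p_rank)).sum(dim=1).mean()

# Toy shapes: a batch of 4 queries with 8 hard negatives each.
B, K = 4, 8
print(ranker_loss(torch.randn(B), torch.randn(B, K)))
print(retriever_loss(torch.randn(B, K), torch.randn(B, K)))
```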
“…It also strongly outperforms a dimensionally-matched DPR, by 3% on MS-MARCO and 1% on NQ in R@100, demonstrating DrBoost's ability to learn high-quality, compact embeddings. We also quote recent state-of-the-art results, which generally achieve stronger exact-search results (AR2; Zhang et al., 2021). Our emphasis, however, is on comparing iteratively-sampled negatives to boosting, and we note that state-of-the-art approaches generally use larger models and more complex training strategies than the "inner loop" BERT-base DPR we report here.…”
Section: Exact Retrieval · Citation type: mentioning
confidence: 98%
“…Fine-tuning: Many attempts have been made to improve fine-tuning performance, such as mining hard negatives (Xiong et al. 2020; Zhan et al. 2021), late interaction (Khattab and Zaharia 2020), distilling knowledge from a strong teacher (Lin, Yang, and Lin 2021; Santhanam et al. 2021), query clustering (Hofstätter et al. 2021), data augmentation (Qu et al. 2020), and jointly optimizing the retriever and re-ranker (Ren et al. 2021b; Zhang et al. 2021, 2022).…”
Section: Related Work · Citation type: mentioning
confidence: 99%
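
As an illustration of the first technique in that list, below is a hedged sketch of hard-negative mining in the spirit of ANCE (Xiong et al. 2020): periodically retrieve the top-k documents with the current model and keep the non-gold ones as hard negatives for the next round. The function name, shapes, and brute-force search are illustrative assumptions, not code from any of the cited papers.

```python
# Hard-negative mining sketch (illustrative; real systems periodically
# rebuild an ANN index such as FAISS instead of brute-force search).
import numpy as np

def mine_hard_negatives(q_emb, doc_emb, gold_ids, k=5):
    """q_emb: (B, H) query embeddings; doc_emb: (N, H) corpus embeddings;
    gold_ids: length-B list of gold document ids. Returns, per query, the
    top-k scoring documents that are not gold -- the 'hard' negatives."""
    scores = q_emb @ doc_emb.T               # (B, N) similarity scores
    ranked = np.argsort(-scores, axis=1)     # best-scoring docs first
    return [
        [int(d) for d in ranked[qi] if d != gold_ids[qi]][:k]
        for qi in range(len(gold_ids))
    ]

# Toy usage: 2 queries over a 10-document corpus.
rng = np.random.default_rng(0)
hard = mine_hard_negatives(rng.normal(size=(2, 16)),
                           rng.normal(size=(10, 16)),
                           gold_ids=[3, 7])
print(hard)  # two lists of 5 non-gold document ids
```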