Proceedings of the 2022 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR '22)
DOI: 10.1145/3539813.3545133
On the Interpolation of Contextualized Term-based Ranking with BM25 for Query-by-Example Retrieval

Abstract: Term-based ranking with pre-trained transformer-based language models has recently gained attention because it brings the contextualization power of transformers into highly efficient term-based retrieval. In this work, we examine the generalizability of two such deep contextualized term-based models in the context of query-by-example (QBE) retrieval, in which a seed document acts as the query to find relevant documents. In this setting, where queries are much longer than common keyword queries, BERT …
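To make the QBE setting concrete, here is a minimal sketch of BM25 retrieval where the query is the full text of a seed document rather than a few keywords. It assumes the third-party rank_bm25 package and naive whitespace tokenization; neither is prescribed by the paper.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Toy corpus; in practice these would be full documents.
corpus = [
    "bm25 is a classic term-based ranking function",
    "transformers contextualize terms for neural retrieval",
    "query by example uses a whole document as the query",
]
tokenized_corpus = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

# In QBE the "query" is a seed document, so it is much longer
# than a typical keyword query but is scored the same way.
seed_document = "a whole seed document used as the query for retrieval"
scores = bm25.get_scores(seed_document.split())
print(scores)  # one BM25 score per corpus document
```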

Cited by 9 publications (4 citation statements) · References 22 publications

Citation statements:
“…It is worth noting that in this paper we concentrate on analyzing the improvement from combining a first-stage retriever with a BERT-based re-ranker: BM25 and CE CAT, respectively. However, we are aware that combining the scores of BM25 and dense retrievers, both of which are first-stage retrievers, has also shown improvements [70][71][72]; these are outside the scope of our study. In particular, CLEAR [10] proposes an approach to train dense retrievers to encode the semantics that BM25 fails to capture for first-stage retrieval.…”
Section: Methods for Combining Rankers
confidence: 89%
“…It is worth noting that in this paper we concentrate on analyzing the improvement from combining a first-stage retriever with a BERT-based re-ranker: BM25 and CE CAT, respectively. However, we are aware that combining the scores of BM25 and dense retrievers, both of which are first-stage retrievers, has also shown improvements [55, 1, 6]; these are outside the scope of our study. In particular, CLEAR [20] proposes an approach to train dense retrievers to encode the semantics that BM25 fails to capture for first-stage retrieval.…”
Section: Related Work
confidence: 89%
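The interpolation these citing papers refer to is, in its simplest form, a convex combination of the two rankers' scores per candidate document. Below is a minimal sketch assuming min-max normalization and an interpolation weight alpha; the weight value and function names are illustrative rather than taken from the paper, and the re-ranker scores stand in for outputs of a BERT cross-encoder such as CE CAT.

```python
def min_max(scores):
    """Min-max normalize a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)  # degenerate case: all scores equal
    return [(s - lo) / (hi - lo) for s in scores]

def interpolate(bm25_scores, reranker_scores, alpha=0.5):
    """Convex combination of normalized first-stage (BM25) and
    re-ranker scores; alpha is an illustrative tuning weight."""
    b, r = min_max(bm25_scores), min_max(reranker_scores)
    return [alpha * bs + (1.0 - alpha) * rs for bs, rs in zip(b, r)]

# Toy usage: three candidate documents scored by both rankers.
bm25_scores = [12.3, 8.7, 10.1]
reranker_scores = [0.91, 0.45, 0.77]  # stand-in cross-encoder outputs
combined = interpolate(bm25_scores, reranker_scores, alpha=0.3)
ranking = sorted(range(len(combined)), key=combined.__getitem__, reverse=True)
print(ranking)  # candidate indices ordered by interpolated score
```

In practice, alpha is typically tuned on a validation set, and some work interpolates raw rather than normalized scores; the cited papers may differ in both respects.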