Proceedings of the 30th ACM International Conference on Information & Knowledge Management 2021
DOI: 10.1145/3459637.3482063

BERT-QPP: Contextualized Pre-trained Transformers for Query Performance Prediction

Cited by 24 publications (25 citation statements: 1 supporting, 24 mentioning, 0 contrasting)
References 26 publications
“…Therefore, they do not necessarily agree on the predicted performance across different queries, corpora, or retrieval methods. This observation has been made for different SOTA QPP methods on various well-known corpora, such as the TREC corpora or MS MARCO, and their associated query sets [8,16,34]. Thus, we conclude that the level of agreement between methods could strengthen our confidence in the query performance prediction.…”
Section: Results (supporting)
confidence: 63%
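As an illustration of how such agreement between QPP methods could be quantified, here is a minimal sketch using Kendall's tau rank correlation over the per-query predictions of two methods; the method names and score values are invented for illustration, not taken from the cited work:

```python
from scipy.stats import kendalltau

# Hypothetical per-query performance predictions from two QPP methods
# over the same query set (illustrative values only).
qpp_method_a = [0.42, 0.18, 0.75, 0.31, 0.66]
qpp_method_b = [0.40, 0.25, 0.70, 0.28, 0.59]

# Kendall's tau measures how similarly the two methods rank the queries;
# a tau near 1 means they agree on which queries are easy or hard.
tau, p_value = kendalltau(qpp_method_a, qpp_method_b)
print(f"Agreement (Kendall's tau): {tau:.3f}, p-value: {p_value:.3f}")
```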
“…Encoding query-document pairs. Following Arabzadeh et al. [4], we first encode each query-document pair with BERT. As documents are frequently long enough to exceed BERT's 512-token limit, we split long texts into equal-sized passages, similar to Co-BERT [8].…”
Section: Methods (mentioning)
confidence: 99%
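A minimal sketch of the passage-splitting step described in this excerpt, assuming a Hugging Face `bert-base-uncased` checkpoint; the 400-token passage length and the function name are illustrative assumptions, not the exact configuration of [4] or Co-BERT [8]:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def encode_query_document(query: str, document: str, passage_len: int = 400):
    """Split a long document into equal-sized token passages and encode each
    (query, passage) pair with BERT, returning one [CLS] vector per pair."""
    doc_tokens = tokenizer.tokenize(document)
    # Equal-sized passages, so each (query, passage) pair fits in 512 tokens.
    passages = [
        tokenizer.convert_tokens_to_string(doc_tokens[i : i + passage_len])
        for i in range(0, max(len(doc_tokens), 1), passage_len)
    ]
    encoded = tokenizer(
        [query] * len(passages),   # pair the query with every passage
        passages,
        padding=True,
        truncation=True,
        max_length=512,
        return_tensors="pt",
    )
    with torch.no_grad():
        output = model(**encoded)
    return output.last_hidden_state[:, 0, :]  # [CLS] embedding per passage
```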
“…With the recent development of deep learning techniques, NeuralQPP [51] achieves promising results by training a three-component deep network under weak supervision from existing methods. Recently, while NQA-QPP [22] uses BERT to generate contextualized embeddings for QPP in non-factoid question answering, BERT-QPP [4] directly applies BERT with pointwise learning to the prediction task, outperforming previous methods on the MS MARCO dev set [30] and the TREC Deep Learning track query sets [13,14]. Groupwise Ranking.…”
Section: Related Work (mentioning)
confidence: 99%
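Here is a minimal sketch of the pointwise formulation attributed to BERT-QPP [4] in this excerpt: a BERT cross-encoder over the (query, top-ranked document) pair with a linear head regressed against the query's measured effectiveness (e.g., its MRR@10). The class name, head size, and training details are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PointwiseQPP(nn.Module):
    """Pointwise QPP in the style of BERT-QPP: predict a query's retrieval
    effectiveness from its (query, top-ranked document) pair."""

    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0, :]   # [CLS] representation
        return self.head(cls).squeeze(-1)      # predicted effectiveness

# Pointwise training step: regress toward the query's measured effectiveness
# (e.g., MRR@10 on MS MARCO); the query, document, and label are invented.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PointwiseQPP()
batch = tokenizer(["what is query performance prediction"],
                  ["Query performance prediction estimates retrieval quality."],
                  padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
target = torch.tensor([0.35])                  # illustrative MRR@10 label
loss = nn.MSELoss()(model(**batch), target)
loss.backward()
```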