2023
DOI: 10.1145/3600088

PARADE: Passage Representation Aggregation for Document Reranking

Abstract: Pre-trained transformer models, such as BERT and T5, have been shown to be highly effective at ad-hoc passage and document ranking. Due to the inherent sequence length limits of these models, they need to process document passages one at a time rather than processing the entire document sequence at once. Although several approaches for aggregating passage-level signals into a document-level relevance score have been proposed, there has yet to be an extensive comparison of these techniques. In this work, we explore …
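As a rough illustration of the setup the abstract describes, the sketch below splits a long document into overlapping fixed-size passages, scores each passage against the query with some passage-level ranker (a stand-in callable here), and pools the per-passage scores into one document score. Max and mean pooling are just two of the aggregation strategies the paper compares; the passage size and stride values are illustrative assumptions, not the paper's configuration.

```python
from typing import Callable, List

def split_into_passages(tokens: List[str], size: int = 225,
                        stride: int = 200) -> List[List[str]]:
    """Split a tokenized document into overlapping fixed-size passages."""
    passages = []
    for start in range(0, len(tokens), stride):
        passages.append(tokens[start:start + size])
        if start + size >= len(tokens):  # last window reaches the end
            break
    return passages

def document_score(query: str, doc_tokens: List[str],
                   score_passage: Callable[[str, str], float],
                   strategy: str = "max") -> float:
    """Aggregate passage-level relevance scores into a document score."""
    scores = [score_passage(query, " ".join(p))
              for p in split_into_passages(doc_tokens)]
    if strategy == "max":   # take the best passage's score
        return max(scores)
    if strategy == "mean":  # average over all passages
        return sum(scores) / len(scores)
    raise ValueError(f"unknown aggregation strategy: {strategy}")
```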

Cited by 22 publications (2 citation statements) | References 89 publications
“…In this sense, it defines how pertinent a document is to a given topic. Thus, to compute the usefulness of retrieved documents, topic-document similarity models based on pre-trained language models, such as BERT-base [39], mono-BERT-large [40], and ELECTRA [41], could be used. Given a topic-document pair, the language model infers a score that gives the level of similarity between the two input text passages.…”
Section: Multi-dimensional Ranking (mentioning)
confidence: 99%
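The topic-document scoring this statement describes is typically implemented as a cross-encoder: the topic and document text are concatenated into a single input and the model emits one relevance score. Below is a minimal sketch using the Hugging Face transformers library; the checkpoint name is only an example of a publicly available cross-encoder, not a model used by the citing paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Example checkpoint only; any BERT/ELECTRA-style cross-encoder works the same way.
MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def similarity_score(topic: str, passage: str) -> float:
    """Score a topic-passage pair with a single forward pass."""
    inputs = tokenizer(topic, passage, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()  # higher = more relevant

print(similarity_score("passage representation aggregation",
                       "PARADE aggregates passage representations for reranking."))
```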
“…In this section, we present some state-of-the-art document retrieval models that are not based on knowledge bases. Li et al. (2020) [26] proposed an approach named PARADE, which is a re-ranking model. They claimed that the model achieved an improvement on the TREC Robust04 and GOV2 collections; it performs best when adopted as a re-ranker, namely the PARADE-Transformer variant.…”
Section: Non-entity-based Document Retrieval (mentioning)
confidence: 99%
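PARADE-Transformer, the variant named above, aggregates the per-passage [CLS] vectors with a small transformer encoder rather than simple pooling. The following is a simplified sketch of that aggregation head; the hidden size, depth, and head count are illustrative assumptions, and the paper's full model fine-tunes the passage encoder jointly, so this is not the authors' code.

```python
import torch
import torch.nn as nn

class TransformerAggregator(nn.Module):
    """Simplified PARADE-Transformer head: aggregate passage [CLS] vectors.

    Dimensions and depth are illustrative, not the paper's exact setup.
    """

    def __init__(self, hidden: int = 768, layers: int = 2, heads: int = 12):
        super().__init__()
        # Learned document-level [CLS] embedding prepended to the passage sequence.
        self.doc_cls = nn.Parameter(torch.randn(1, 1, hidden))
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, passage_cls: torch.Tensor) -> torch.Tensor:
        # passage_cls: (batch, num_passages, hidden) from a BERT-style ranker.
        batch = passage_cls.size(0)
        cls = self.doc_cls.expand(batch, -1, -1)
        seq = torch.cat([cls, passage_cls], dim=1)
        out = self.encoder(seq)
        return self.scorer(out[:, 0]).squeeze(-1)  # score from document [CLS]

# Usage: 4 documents, 16 passages each, 768-d [CLS] vectors -> 4 scores.
agg = TransformerAggregator()
scores = agg(torch.randn(4, 16, 768))
```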