Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1478

NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval

Abstract: Pseudo relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches. While neural retrieval models have recently demonstrated strong results for ad-hoc retrieval, combining them with PRF is not straightforward due to incompatibilities between existing PRF approaches and neural architectures. To bridge this gap, we propose…
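The classic PRF idea summarized in the abstract, identifying and weighting new query terms from the top-ranked documents of an initial run, can be illustrated with a minimal, RM3-style sketch. The function name, the tf-idf-like candidate scoring, and the interpolation weight below are illustrative assumptions for this page, not details taken from the NPRF paper:

```python
# Minimal sketch of classic pseudo relevance feedback (RM3-style term weighting).
# The scoring heuristic and parameter names are illustrative, not from the paper.
from collections import Counter
from math import log


def prf_expand_query(query_terms, top_docs, num_expansion_terms=10, orig_weight=0.5):
    """Expand a query with terms drawn from the top-ranked (pseudo-relevant) documents.

    query_terms: list of strings, the original query.
    top_docs: list of token lists, the top-k documents from an initial retrieval run.
    Returns a dict mapping term -> interpolated weight.
    """
    # Count candidate terms across the feedback documents.
    term_freq = Counter()
    doc_freq = Counter()
    for doc in top_docs:
        term_freq.update(doc)
        doc_freq.update(set(doc))

    num_docs = len(top_docs)
    # Weight candidates by a simple tf * idf-like score over the feedback set.
    scored = {
        t: tf * log(1 + num_docs / doc_freq[t])
        for t, tf in term_freq.items()
        if t not in query_terms
    }
    expansion = dict(Counter(scored).most_common(num_expansion_terms))

    # Normalise expansion weights and interpolate with the original query terms.
    total = sum(expansion.values()) or 1.0
    expanded = {t: (1 - orig_weight) * w / total for t, w in expansion.items()}
    for t in query_terms:
        expanded[t] = expanded.get(t, 0.0) + orig_weight / len(query_terms)
    return expanded


if __name__ == "__main__":
    query = ["neural", "retrieval"]
    feedback_docs = [
        ["neural", "ranking", "model", "relevance", "feedback"],
        ["pseudo", "relevance", "feedback", "query", "expansion"],
    ]
    print(prf_expand_query(query, feedback_docs, num_expansion_terms=5))
```

The point of the sketch is the vocabulary-mismatch fix: terms that never appear in the query but co-occur in the pseudo-relevant documents receive weight and can match relevant documents in a second retrieval pass.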

Cited by 55 publications (34 citation statements); references 27 publications. Selected citation statements follow.

“…The traditional PRF model (i.e., RM3) and LTR models (i.e., RankSVM and LambdaMart) with human-designed features are strong baselines whose performance is hard to beat for most neural ranking models based on raw texts. However, the PRF technique can also be leveraged to enhance neural ranking models (e.g., SNRM+PRF [28] and NPRF+DRMM [119] in Table 1), while human-designed LTR features can be integrated into neural ranking models [33,31] to improve the ranking performance. Table 1: Overview of previously published results on ad hoc retrieval datasets.…”
Section: Empirical Comparison On Ad-hoc Retrieval (mentioning)
confidence: 99%
“…The citation in each row denotes the original paper where the method is proposed. The superscripts 1-6 denote that the results are cited from [21], [33], [34], [118], [28], [119], [84]. There seems to be a paradigm shift of the neural ranking model architectures from symmetric to asymmetric and from representation-focused to interaction-focused over time.…”
Section: Empirical Comparison On Ad-hoc Retrieval (mentioning)
confidence: 99%
“…Whereas most interaction- and representation-based approaches compute relevance at the document level, Fan et al. [10] recently proposed a hierarchical neural matching model (HiNT), which employs a local matching layer and a global decision layer to capture relevance signals at the passage and document level that compete with each other. Another recent work has achieved state-of-the-art performance by creating a neural pseudo relevance feedback framework (NPRF) that can be used with existing neural IR models as building blocks [25].…”
Section: Related Work (mentioning)
confidence: 99%
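The statement above describes NPRF only at a high level: an existing neural IR model is reused as a building block over the pseudo-relevant documents. A rough sketch of that idea, under stated assumptions, is given below. The `neural_matcher` interface and the weighted-sum aggregation are assumptions made for illustration, not the paper's exact scoring function:

```python
# Hedged sketch of using a neural matcher over pseudo-relevant documents:
# re-score a candidate document by comparing it against each top-k feedback
# document, then aggregate. Interface and aggregation are illustrative only.
from typing import Callable, List, Tuple


def nprf_score(
    candidate_doc: str,
    feedback_docs: List[Tuple[str, float]],   # (document text, initial retrieval score)
    neural_matcher: Callable[[str, str], float],
) -> float:
    """Aggregate doc-to-doc neural matching scores over pseudo-relevant documents."""
    if not feedback_docs:
        return 0.0
    # Normalise the initial retrieval scores so they act as soft weights.
    total = sum(score for _, score in feedback_docs) or 1.0
    return sum(
        (score / total) * neural_matcher(feedback_doc, candidate_doc)
        for feedback_doc, score in feedback_docs
    )


if __name__ == "__main__":
    # A toy word-overlap matcher stands in for a trained neural model (e.g., DRMM).
    def toy_matcher(a: str, b: str) -> float:
        a_set, b_set = set(a.split()), set(b.split())
        return len(a_set & b_set) / max(len(a_set | b_set), 1)

    feedback = [("neural ranking with relevance feedback", 2.0),
                ("pseudo relevance feedback for retrieval", 1.5)]
    print(nprf_score("neural retrieval with feedback", feedback, toy_matcher))
```

In this reading, the feedback documents act as expanded, soft representations of the query, which is what lets a document-to-document neural matcher substitute for explicit term expansion.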
“…Classic query expansion techniques expand the original query with terms selected from related documents; terms are usually added as a bag-of-words [1,9]. There is prior research that shows classic query expansion to be effective for a few neural ranking models where query terms and document terms are matched softly [10]. However, there is no prior work exploring new ways to add pseudo-relevance feedback to BERT-based rankers that rely on free-flowing natural language text.…”
Section: Related Work (mentioning)
confidence: 99%