2021
DOI: 10.1007/s10791-021-09398-0
Neural ranking models for document retrieval

Abstract: Ranking models are the main components of information retrieval systems. Several approaches to ranking are based on traditional machine learning algorithms using a set of hand-crafted features. Recently, researchers have leveraged deep learning models in information retrieval. These models are trained end-to-end to extract features from the raw data for ranking tasks, so that they overcome the limitations of hand-crafted features. A variety of deep learning models have been proposed, and each model presents a …
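To make the abstract's notion of end-to-end feature learning concrete, here is a minimal, hypothetical sketch (not any specific model from the survey): a representation-based ranker that embeds raw query and document token ids, encodes them, and scores relevance with cosine similarity, so no hand-crafted features are involved.

# Minimal sketch, assuming a representation-based ranker trained end-to-end
# from raw token ids; illustrative only, not the surveyed models' code.
import torch
import torch.nn as nn

class SimpleNeuralRanker(nn.Module):
    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def encode(self, token_ids):
        emb = self.embed(token_ids)      # (batch, seq, dim) learned embeddings
        states, _ = self.encoder(emb)    # contextual states from the GRU
        return states.mean(dim=1)        # mean-pool into one text vector

    def forward(self, query_ids, doc_ids):
        q = self.encode(query_ids)
        d = self.encode(doc_ids)
        # Relevance score: cosine similarity between query and document vectors.
        return nn.functional.cosine_similarity(q, d, dim=-1)

# Toy usage with random (hypothetical) token ids for one query-document pair.
ranker = SimpleNeuralRanker()
query = torch.randint(1, 30000, (1, 8))
doc = torch.randint(1, 30000, (1, 64))
score = ranker(query, doc)

Trained with a ranking loss over labeled query-document pairs, every parameter (embeddings, encoder, scorer) is learned jointly, which is what "end-to-end" means here.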

Citations: cited by 21 publications (6 citation statements)
References: 142 publications
“…Also, it uses a large amount of web knowledge to generate expanded queries, which may lead to scalability issues when applied to large-scale collections or real-time scenarios. 2021 Trabelsi et al [21] "Neural ranking models for document retrieval"…”
Section: Appendix
mentioning confidence: 99%
“…Deep contextualized language models, like BERT [16] and RoBERTa [25], have been recently proposed to solve multiple tasks [13,23,29,30,35,38,39,42,45,48,50]. Building on BERT, Chen et al [8] proposed a BERT-based ranking model to capture the matching signals between the query and the table fields using the sentence pair setting.…”
Section: Related Work
mentioning confidence: 99%
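As a rough illustration of that sentence-pair setting (an assumed sketch, not the cited authors' actual implementation), a BERT cross-encoder can score a query against a table field by feeding both texts as one paired input and reading the classification head; the model name and example strings below are placeholders.

# Hedged sketch of sentence-pair scoring with a BERT cross-encoder.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

query = "neural ranking models for tables"          # hypothetical query
table_field = "Caption: deep learning for retrieval" # hypothetical table field text

# Sentence-pair encoding: query as segment A, table field as segment B.
inputs = tokenizer(query, table_field, return_tensors="pt",
                   truncation=True, max_length=128)
with torch.no_grad():
    # The single logit over [CLS] serves as the matching score
    # (meaningful only after fine-tuning on relevance labels).
    score = model(**inputs).logits.squeeze(-1)

The design point is that the query and field attend to each other inside one transformer, which is what lets the model capture fine-grained matching signals, at the cost of re-encoding every pair at query time.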
“…This task naturally invites neural information retrieval. Large Language Models (LLMs) have been demonstrated to understand language and can extract core concepts in text irrespective of the writing style. Furthermore, Graph Neural Networks (GNNs) can encode relationships between data, making them a beneficial backend for recommendation systems.…”
Section: Introduction
mentioning confidence: 99%