2016
DOI: 10.1145/2987380
Fast Ranking with Additive Ensembles of Oblivious and Non-Oblivious Regression Trees

Abstract: Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present Q …

Cited by 71 publications (66 citation statements)
References 25 publications
“…The QUICKSCORER (QS) algorithm [15], [16], which restructures the data layout and the processing of an ensemble of regression trees to leverage modern memory hierarchies and reduce branch prediction errors that cause control hazards, proved up to 6.6x faster than VPRED.…”
Section: Related Work
confidence: 99%
“…Therefore, the worst-case complexity of QS is linear, i.e., Θ(Λ|T|) time. However, experiments on real datasets [16] show that the number of false nodes visited per tree is only a small fraction of Λ, and, admittedly, QS outperforms other algorithms with better worst-case time complexity.…”
Section: Introduction
confidence: 99%
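The Θ(Λ|T|) bound above comes from QS testing every internal node of every tree against the document (rather than walking a root-to-leaf path), AND-ing the bitvectors of the nodes whose tests fail, and reading the exit leaf off the surviving bits. A minimal sketch of that bitvector idea, in plain Python — the data layout and names here are illustrative assumptions, not the paper's exact structures (which also interleave nodes by feature for cache efficiency):

```python
# Hedged sketch of bitvector-based ensemble scoring in the style of
# QUICKSCORER. Each internal node carries a bitvector with 0s on the
# leaves of its left subtree: when the node's test x[f] <= t is false,
# the document cannot reach those leaves, so we AND the vector in.
# The leftmost surviving bit then identifies the exit leaf.

def score_document(trees, features):
    """Score one document against an ensemble of small trees.

    trees: list of dicts with keys
      'num_leaves'  -- number of leaves (bit 0 = leftmost leaf),
      'nodes'       -- list of (feature_id, threshold, bitvector),
      'leaf_values' -- leaf outputs ordered left to right.
    features: indexable document feature vector.
    """
    score = 0.0
    for tree in trees:
        mask = (1 << tree['num_leaves']) - 1      # all leaves reachable
        for feat, thr, bitvec in tree['nodes']:   # visit *every* node
            if features[feat] > thr:              # "false" node
                mask &= bitvec                    # prune its left subtree
        leaf = (mask & -mask).bit_length() - 1    # lowest set bit
        score += tree['leaf_values'][leaf]
    return score
```

Note the branch inside the loop only gates a cheap AND, which is what makes the per-node work predictable; the real algorithm goes further and iterates feature-by-feature over nodes sorted by threshold, stopping at the first true test.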
“…Our experiments are performed over three large publicly available LTR datasets [4,7,23]; these are retired validation sets published by large commercial search engines. Each dataset consists of queries, documents, and relevance labels.…”
Section: Datasets
confidence: 99%