2019
DOI: 10.1109/tpds.2018.2860982
Parallel Traversal of Large Ensembles of Decision Trees

Abstract: Machine-learnt models based on additive ensembles of regression trees are currently deemed the best solution to address complex classification, regression, and ranking tasks. The deployment of such models is computationally demanding: to compute the final prediction, the whole ensemble must be traversed by accumulating the contributions of all its trees. In particular, traversal cost impacts applications where the number of candidate items is large, the time budget available to apply the learnt model to them i…
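The accumulation step the abstract refers to can be illustrated with a minimal, naive traversal sketch. This is not the parallel traversal algorithm the paper proposes; it only shows the baseline logic of walking each tree root-to-leaf and summing the per-tree contributions. The Node structure and function names below are hypothetical, introduced here purely for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    # Internal node: test features[feature_id] <= threshold; leaf: 'output' holds the score.
    feature_id: int = -1
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    output: float = 0.0

def traverse_tree(root: Node, features: List[float]) -> float:
    """Walk a single regression tree from the root to a leaf and return the leaf score."""
    node = root
    while node.left is not None and node.right is not None:
        node = node.left if features[node.feature_id] <= node.threshold else node.right
    return node.output

def score_ensemble(trees: List[Node], features: List[float]) -> float:
    """Additive ensemble: the final prediction is the sum of all per-tree contributions."""
    return sum(traverse_tree(tree, features) for tree in trees)
```

With thousands of trees and many candidate items per query, this per-item, per-tree loop is exactly the traversal cost the paper targets.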

Cited by 18 publications (14 citation statements). References 22 publications (30 reference statements).
“…While the efficiency of learning-to-rank solutions for document re-ranking has been extensively studied [41,91,197], computational efficiency concerns have largely been ignored by prior work in neural ranking, prompting some to call for more attention to this matter [68,98]. That being said, some efforts do exist.…”
Section: Background and Preliminaries
confidence: 99%
“…the same query, while others linearly combine this score with the overall document relevance score [3,7,25,50,52]. More recently, learning-to-rank algorithms have been deployed in many information retrieval systems [8,18,46] to rank documents.…”
Section: Related Work
confidence: 99%
“…The main reason for this success is the ability of deep neural networks to understand complex language patterns and to learn to extract features from text that let them match queries and documents while abstracting from their lexical representation. In the same time frame, feature-based LtR methods reached maturity, and research in this area focused primarily on specific aspects such as efficiency [10,39,45,46,74,80], diversification [70], and permutation-invariant models [62,64]. Another topic investigated in feature-based LtR was how to reduce the performance gap between neural and ensemble-based models [66].…”
Section: Introduction
confidence: 99%