Proceedings of the Third ACM International Conference on Web Search and Data Mining 2010
DOI: 10.1145/1718487.1718538
Early exit optimizations for additive machine learned ranking systems

Abstract: Some commercial web search engines rely on sophisticated machine learning systems for ranking web documents. Due to very large collection sizes and tight constraints on query response times, the online efficiency of these learning systems forms a bottleneck. An important problem in such systems is to speed up the ranking process without sacrificing much of the result quality. In this paper, we propose optimization strategies that allow short-circuiting score computations in additive learning systems. The stra…
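To make the short-circuiting idea concrete, here is a minimal sketch of early-exit scoring over an additive ensemble. This is an illustration under assumed names (score_with_early_exit, checkpoints), not the paper's exact method; the paper proposes several strategies for where to place exit points and how to set their thresholds.

```python
# A minimal sketch of early-exit (short-circuit) scoring for an
# additive ensemble, e.g. boosted regression trees. All names and
# threshold values here are illustrative assumptions, not the
# paper's algorithm.

def score_with_early_exit(features, scorers, checkpoints):
    """Accumulate per-stage contributions, exiting early when the
    running sum falls below a tuned threshold at a checkpoint.

    features    -- feature vector for one document
    scorers     -- list of callables, each mapping features to one
                   additive stage's partial score (e.g. one tree)
    checkpoints -- dict {stage_index: min_partial_score}; a document
                   whose running sum is below the threshold is
                   unlikely to reach the final results, so we stop
                   spending scoring time on it
    """
    partial = 0.0
    for i, scorer in enumerate(scorers):
        partial += scorer(features)
        threshold = checkpoints.get(i)
        if threshold is not None and partial < threshold:
            return partial, i + 1  # exited after i + 1 stages
    return partial, len(scorers)


# Toy usage: three constant "trees" and one checkpoint after stage 0.
scorers = [lambda f: 0.2, lambda f: 0.5, lambda f: 0.1]
print(score_with_early_exit([1.0, 2.0], scorers, {0: 0.3}))
# -> (0.2, 1): the running sum 0.2 < 0.3, so stages 1 and 2 are skipped
```

In a production ranker the checkpoint thresholds would be tuned offline against the quality/latency tradeoff; the abstract's claim is precisely that such exits save substantial scoring time without sacrificing much result quality.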

Cited by 102 publications (105 citation statements) · References 26 publications
“…We build ranking functions that satisfy each query's time requirement, whereas query evaluation strategies, such as early-termination techniques [7,2,28], simply improve the speed of existing ranking models, but do not provide an explicit mechanism for controlling the tradeoff between effectiveness and efficiency. The same critique applies to work on index pruning [8,23], where the goal is to improve query evaluation speed by reducing the size of the index.…”
Section: Related Work
confidence: 99%
“…Examples of this search architecture abound in industry [7] and academia [8,6]. In fact, the TREC 2013 Microblog evaluation is set up exactly along these lines: participants do not have access to the raw collection; instead, they must complete the task via a search API that returns candidate results.…”
Section: Discussion
confidence: 99%
“…Their applicability in practice is mainly limited by the fact that they have to evaluate many rankers at test time, and it is well known that evaluation time is crucial in a real-world learning-to-rank application. This motivates the development of a framework in which a controller can select the rankers to be evaluated based on the characteristics of individual queries (Cambazoglu et al. 2010). Our future goal here is to model the problem as a Markov decision process and solve it using standard reinforcement learning techniques (Dulac-Arnold et al. 2011; Benbouzid et al. 2011, 2012a).…”
Section: Discussion
confidence: 99%