2007
DOI: 10.1007/s10791-006-9019-z
Linear feature-based models for information retrieval

Abstract: There have been a number of linear, feature-based models proposed by the information retrieval community recently. Although each model is presented differently, they all share a common underlying framework. In this paper we explore and discuss the theoretical issues of this framework, including a novel look at the parameter space. We then detail supervised training algorithms that directly maximize the evaluation metric under consideration, such as mean average precision. We present results that show…
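As a rough illustration of the framework the abstract describes (not the paper's exact formulation), a linear feature-based model scores each candidate document as a weighted combination of its query-document features and ranks by that score; the feature values and weights below are made up.

```python
import numpy as np

def linear_score(weights: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Score each candidate document as a weighted sum of its
    query-document features (e.g. BM25 score, PageRank, title match).
    `features` has one row per document, one column per feature."""
    return features @ weights

# Illustrative only: 4 candidate documents, 3 features each.
w = np.array([0.6, 0.3, 0.1])              # model parameters to be learned
X = np.array([[0.2, 0.9, 0.4],
              [0.8, 0.1, 0.5],
              [0.5, 0.5, 0.5],
              [0.1, 0.2, 0.9]])
ranking = np.argsort(-linear_score(w, X))  # document indices, best first
print(ranking)
```

Learning then amounts to choosing `w` so that the induced rankings score well under the evaluation metric, which is what the training algorithms in the paper optimize directly.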

Cited by 307 publications (188 citation statements)
References 29 publications

Citation statements (ordered by relevance):
“…6. COORDINATEASCENT (CA) (Metzler and Croft 2007) is a linear listwise model, where the scores of the query-document pairs are calculated as weighted combinations of the feature values. The weights are tuned by using a coordinate ascent optimization method, where the objective function is an arbitrary evaluation metric given by the user.…”
Section: Methods and Experimental Setup
confidence: 99%
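A minimal sketch of the tuning loop this excerpt describes, assuming a black-box `evaluate(w)` that returns the user-chosen metric (e.g. MAP) of the ranking induced by weights `w` over the training queries. The fixed step grid and sweep count are illustrative simplifications, not Metzler and Croft's exact line-search procedure.

```python
import numpy as np

def coordinate_ascent(evaluate, n_features, n_sweeps=10,
                      steps=(0.05, 0.1, 0.2, 0.5, 1.0)):
    """Tune a linear ranker's weights one coordinate at a time.

    `evaluate(w)` must return the target evaluation metric of the
    ranking induced by w on the training queries; higher is better.
    The metric is treated as a black box, so any measure works."""
    w = np.ones(n_features) / n_features
    best = evaluate(w)
    for _ in range(n_sweeps):
        for i in range(n_features):
            for delta in steps:
                for signed in (delta, -delta):
                    trial = w.copy()
                    trial[i] += signed
                    m = evaluate(trial)
                    if m > best:            # keep the move only if the
                        best, w = m, trial  # metric improves
    return w, best

# Toy usage: a stand-in "metric" that peaks at weights (0.7, 0.3).
target = np.array([0.7, 0.3])
w, m = coordinate_ascent(lambda w: -np.sum((w - target) ** 2), n_features=2)
print(w, m)
```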
“…We use this to leverage a semantic kernel function that uses relations between entities as a similarity measure (Section 3.6) and also study an alternative linear kernel. In addition, we use a list-wise learning algorithm, here an implementation from RankLib, to directly optimize Mean Average Precision (MAP) and alternatively Normalized Discounted Cumulative Gain (NDCG), thus addressing the so-called metric divergence problem [30]. We use coordinate ascent as an optimization algorithm, since it has demonstrated good performance on low-dimensional feature spaces with limited training data.…”
Section: Learning To Rank Entities For Web Queries
confidence: 99%
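For concreteness, the metric such a list-wise learner optimizes directly can be computed as follows; this is a minimal average-precision sketch assuming binary relevance and that every relevant document appears in the ranking (otherwise divide by the true number of relevant documents). NDCG is computed analogously from graded labels with a log-rank discount.

```python
def average_precision(ranked_rels):
    """AP of one ranked list: `ranked_rels` holds the 0/1 relevance of
    each retrieved document in rank order. MAP is the mean of this
    value over all evaluation queries."""
    hits, ap = 0, 0.0
    for rank, rel in enumerate(ranked_rels, start=1):
        if rel:
            hits += 1
            ap += hits / rank        # precision at each relevant position
    return ap / hits if hits else 0.0

print(average_precision([1, 0, 1, 0]))   # (1/1 + 2/3) / 2 ≈ 0.833
```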
“…Further details can be found in [1]. In our experiments, we consider four popular algorithms across three classes: MART [16] (point-wise), RankBoost [17] (pair-wise), Coordinate Ascent [2] and LambdaMART [18] (list-wise).…”
Section: Related Work
confidence: 99%
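A schematic contrast of the three classes named in this excerpt, using simple stand-in objectives (squared error for point-wise, a logistic pair loss in the spirit of pair-wise rankers, AP of the induced ordering for list-wise). These are illustrative stand-ins, not the actual losses MART, RankBoost, or LambdaMART optimize.

```python
import numpy as np

def pointwise_loss(scores, labels):
    """Point-wise: each document treated independently; here squared
    error between predicted score and its relevance label."""
    return np.mean((scores - labels) ** 2)

def pairwise_loss(scores, labels):
    """Pair-wise: penalize each mis-ordered pair via a logistic loss
    on the score margin."""
    total, n = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:          # i should rank above j
                total += np.log1p(np.exp(scores[j] - scores[i]))
                n += 1
    return total / n if n else 0.0

def listwise_objective(scores, labels):
    """List-wise: judge the whole ranking at once; here the AP of the
    induced ordering (binary labels), maximized rather than minimized."""
    order = np.argsort(-scores)
    rels = labels[order] > 0
    hits, ap = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            ap += hits / rank
    return ap / hits if hits else 0.0

s, y = np.array([2.0, 1.0, 0.5]), np.array([1, 0, 1])
print(pointwise_loss(s, y), pairwise_loss(s, y), listwise_objective(s, y))
```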
“…Most importantly, these approaches automatically learn the most effective combination of these features in the ranking function based on the available training data. As a result, learning to rank approaches have consistently outperformed the standard bag-of-words retrieval models [2] [3].…”
Section: Introduction
confidence: 99%