2011
DOI: 10.1587/transinf.e94.d.1854

A Short Introduction to Learning to Rank

Cited by 251 publications (141 citation statements)
References 16 publications
“…4). In two-dimensional space, f(s) is proved to be increasing along each coordinate if control points are restricted in the interior of the hypercube [0, 1]^d [14]. Thus, a proposition can be deduced by Lemma 1.…”
Section: RPC Formulation With Bézier Curves
confidence: 98%
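The monotonicity property quoted above can be illustrated numerically. Below is a minimal, self-contained sketch (not the cited construction): it evaluates a Bézier curve f(s) with control points inside [0, 1]^d via the Bernstein basis and checks, by dense sampling, whether every coordinate of f(s) is non-decreasing in s. The control-point values and the dimension d = 2 are illustrative assumptions.

```python
import numpy as np
from math import comb

def bezier(control_points, s):
    """Evaluate a Bezier curve f(s) in R^d from (n+1) x d control points
    using the Bernstein basis, for a parameter s in [0, 1]."""
    P = np.asarray(control_points, dtype=float)
    n = P.shape[0] - 1
    basis = np.array([comb(n, i) * s**i * (1 - s)**(n - i) for i in range(n + 1)])
    return basis @ P  # point in R^d

def is_coordinatewise_increasing(control_points, num_samples=1000):
    """Numerically check whether every coordinate of f(s) is non-decreasing in s."""
    ss = np.linspace(0.0, 1.0, num_samples)
    values = np.array([bezier(control_points, s) for s in ss])
    return bool(np.all(np.diff(values, axis=0) >= -1e-12))

# Illustrative control points in the interior of [0, 1]^2 (not from the cited work).
ctrl = [[0.1, 0.2], [0.3, 0.4], [0.6, 0.5], [0.9, 0.8]]
print(is_coordinatewise_increasing(ctrl))  # True for these control points
```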
“…There are generally three formulations (Li, 2011): pointwise ranking, pairwise ranking, and listwise ranking. The goal is to learn a ranking function f(w, tp_i) → y_i, where tp_i denotes a text pair <s1, s2>.…”
Section: Learning To Rank Semantic Coherence
confidence: 99%
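To make the distinction between these formulations concrete, here is a minimal sketch, assuming a linear scoring function f(w, x) = w·x and made-up features and graded labels (none of this is the cited system): a pointwise squared loss on individual items versus a pairwise hinge loss over ordered pairs from the same query. The listwise formulation is sketched further below with NDCG.

```python
import numpy as np

def score(w, x):
    """Linear ranking function f(w, x) = w . x (a common simple choice)."""
    return np.dot(w, x)

def pointwise_loss(w, X, y):
    """Pointwise formulation: regress each item's score onto its relevance label."""
    preds = X @ w
    return float(np.mean((preds - y) ** 2))

def pairwise_loss(w, X, y, margin=1.0):
    """Pairwise formulation: hinge loss on ordered pairs (i more relevant than j)."""
    losses = []
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                losses.append(max(0.0, margin - (score(w, X[i]) - score(w, X[j]))))
    return float(np.mean(losses)) if losses else 0.0

# Illustrative data: 4 items for one query, 3 features, graded relevance labels.
X = np.array([[0.2, 0.1, 0.9],
              [0.8, 0.4, 0.3],
              [0.5, 0.5, 0.5],
              [0.1, 0.9, 0.2]])
y = np.array([2.0, 1.0, 1.0, 0.0])
w = np.array([0.3, 0.2, 0.5])
print(pointwise_loss(w, X, y), pairwise_loss(w, X, y))
```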
“…This approach typically views the entire ranked list of documents as a learning instance while optimizing some objective function defined over all of the documents, such as normalized discounted cumulative gain (NDCG) [2]. We refer readers interested in more details of LTR to [4, 17, 14] for more comprehensive reviews.…”
Section: Learning To Rank
confidence: 99%
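NDCG, the listwise objective mentioned in this excerpt, is compact enough to write down directly. A minimal sketch using the standard 2^rel − 1 gain and log2 position discount (the relevance grades in the example are illustrative):

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: sum over positions i = 1..k of (2^rel_i - 1) / log2(i + 1)."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum((2 ** rel - 1) / discounts))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the ranked list normalized by the DCG of the ideal ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance labels of documents in their ranked order (illustrative values).
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=6))
```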
“…The Coordinate Ascent algorithm has also been shown to be effective for learning linear ranking functions in some other search domains [19]. One key benefit of the listwise learning-to-rank approach over pointwise and pairwise ones is that the listwise approach can optimize ranking-based metrics directly [14, 17]. The objective function we optimize in the learning process is normalized discounted cumulative gain (NDCG@K) defined on the graded relevance labels as described above.…”
Section: Existing Features And Learning Algorithm
confidence: 99%
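As a rough illustration of optimizing NDCG@K directly with a linear ranking function, here is a greedy coordinate ascent sketch in the spirit of the approach cited above, though not its actual implementation: the step sizes, sweep count, and toy queries are made-up assumptions, and random restarts and weight normalization are omitted. NDCG@k is re-defined locally so the sketch runs on its own.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """NDCG@k with the standard 2^rel - 1 gain and log2 position discount."""
    rel = np.asarray(relevances, dtype=float)[:k]
    dcg = float(np.sum((2 ** rel - 1) / np.log2(np.arange(2, rel.size + 2))))
    ideal_rel = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    ideal = float(np.sum((2 ** ideal_rel - 1) / np.log2(np.arange(2, ideal_rel.size + 2))))
    return dcg / ideal if ideal > 0 else 0.0

def ndcg_of_weights(w, queries, k):
    """Average NDCG@k over queries when documents are ranked by the linear score X @ w.
    Each query is a (X, y) pair: feature matrix and graded relevance labels."""
    total = 0.0
    for X, y in queries:
        order = np.argsort(-(X @ w))            # rank documents by descending score
        total += ndcg_at_k(np.asarray(y)[order], k)
    return total / len(queries)

def coordinate_ascent(queries, num_features, k=10, steps=(0.5, 0.1, 0.02), sweeps=20):
    """Greedy coordinate ascent: adjust one weight at a time and keep a change only
    if it improves average NDCG@k, i.e. the listwise metric is optimized directly."""
    w = np.ones(num_features)
    best = ndcg_of_weights(w, queries, k)
    for _ in range(sweeps):
        for d in range(num_features):
            for step in steps:
                for delta in (step, -step):
                    w_try = w.copy()
                    w_try[d] += delta
                    val = ndcg_of_weights(w_try, queries, k)
                    if val > best:
                        w, best = w_try, val
    return w, best

# Illustrative toy data: two queries with 3 features per document (values are made up).
q1 = (np.array([[0.9, 0.1, 0.3], [0.2, 0.8, 0.1], [0.4, 0.4, 0.4]]), [2, 0, 1])
q2 = (np.array([[0.3, 0.7, 0.2], [0.6, 0.2, 0.9]]), [1, 2])
w, ndcg = coordinate_ascent([q1, q2], num_features=3, k=3)
print(w, ndcg)
```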