Proceedings of the 7th ACM Conference on Recommender Systems 2013
DOI: 10.1145/2507157.2507210
Learning to rank recommendations with the k-order statistic loss

Abstract: Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present …
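
To make the family concrete, here is a minimal sketch of one SGD step under a k-order-statistic-style loss for a factorization model. The factor matrices U and V, the position parameter k, and all hyperparameters are illustrative assumptions, not the paper's reference implementation; varying which sampled positive the loss focuses on is what moves between mean-rank and top-of-the-list behaviour.

```python
# Hedged sketch: one SGD step of a k-order-statistic-style ranking loss.
# U: (n_users, d) user factors; V: (n_items, d) item factors. All names and
# hyperparameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def kos_warp_step(U, V, user, positives, k=3, n_pos=5,
                  lr=0.05, margin=1.0, max_tries=100):
    n_items = V.shape[0]
    pos_set = set(positives)
    # 1) Draw a small sample of the user's positive items and score them.
    sample = rng.choice(positives, size=min(n_pos, len(positives)), replace=False)
    order = np.argsort(-(V[sample] @ U[user]))   # best-scoring first
    # 2) k-order statistic: focus on the positive ranked k-th in the sample
    #    (small k stresses the top of the list; large k behaves like mean rank).
    i = sample[order[min(k, len(order)) - 1]]
    s_i = V[i] @ U[user]
    # 3) WARP-style rejection sampling: look for a margin-violating negative.
    for tries in range(1, max_tries + 1):
        j = int(rng.integers(n_items))
        if j not in pos_set and V[j] @ U[user] > s_i - margin:
            break
    else:
        return  # positives already well separated; skip this update
    # 4) Hinge-loss SGD step, weighted by the log of the estimated rank of i.
    w = np.log1p((n_items - len(positives)) // tries)
    u_old = U[user].copy()
    U[user] -= lr * w * (V[j] - V[i])
    V[i] += lr * w * u_old
    V[j] -= lr * w * u_old
```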

Years of citing publications: 2014-2024

Cited by 48 publications (41 citation statements)
References 8 publications
“…Many other modifications to ranking loss functions have been proposed in the literature that interpolate between the first two loss functions proposed above, or which prioritize correctly predicting the top-ranked choices. These losses include the area under the curve loss [Ste07], the ordered weighted average of pairwise classification losses [UBG09], the weighted approximate-rank pairwise loss [WBU10], the k-order statistic loss [WYW13], and the accuracy at the top loss [BCMR12].…”
Section: Examples (mentioning, confidence: 99%)
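
The interpolation these losses perform can be written compactly as one weighted-rank family (our notation, not the citing paper's), with model score f(u, i) and positive item set P_u:

```latex
\ell(u) = \sum_{i \in P_u} \Phi\bigl(\operatorname{rank}(u,i)\bigr),
\qquad
\operatorname{rank}(u,i) = \sum_{j \notin P_u} \mathbf{1}\bigl[f(u,j) + 1 \ge f(u,i)\bigr],
\qquad
\Phi(r) = \sum_{s=1}^{r} \alpha_s .
```

Constant weights \alpha_s recover an AUC-style mean-rank objective; decreasing weights such as \alpha_s = 1/s concentrate the penalty on the top of the list, as in the weighted approximate-rank pairwise loss; the k-order statistic loss additionally re-weights which positives i contribute.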
“…This can be done by training a rating-based model with matrix completion to learn from observed user-item associations (either explicit or implicit feedback) to predict associations that are unobserved [1, 5, 12, 13, 15, 16, 18, 20, 21, 26, 36]. In addition to this rating-based approach, ranking-based methods have been proposed based on optimizing ranking loss; the ranking-based methods [21, 25, 33-35] have been found more suitable for implicit feedback. However, many existing model-based CF algorithms leverage only the user-item associations available in a given user-item bipartite graph.…”
Section: Introduction (mentioning, confidence: 99%)
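
As a minimal sketch of the rating-based route this statement describes, assuming observed (user, item, value) triples and illustrative hyperparameters, matrix completion by SGD might look like:

```python
# Hedged sketch: SGD matrix completion fit only on observed cells.
import numpy as np

def factorize(observed, n_users, n_items, rank=16,
              lr=0.01, reg=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in observed:
            err = r - U[u] @ V[i]          # residual on an observed association
            U[u], V[i] = (U[u] + lr * (err * V[i] - reg * U[u]),
                          V[i] + lr * (err * U[u] - reg * V[i]))
    return U, V

# An unobserved association is then scored from the completed matrix:
# predicted = U[user] @ V[item]
```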
“…It is argued that this approach uses more informative (u, i, j) triples and avoids wasteful gradient computations, resulting in faster convergence. A related approach is to sample j from a random subset of irrelevant items that have higher ranking scores than i with a margin. Yet another related approach samples j from a random but fixed-size subset, S_u, of irrelevant items for user u.…”
Section: Extensions to PLTR Algorithms (mentioning, confidence: 99%)
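
Both sampling strategies described in this statement are straightforward to sketch; the callable score, the list irrelevant, and the subset S_u below are our own illustrative names:

```python
# Hedged sketch of the two negative-sampling strategies for (u, i, j) triples.
import random

def sample_violating_negative(score, u, i, irrelevant, margin=1.0, max_tries=50):
    """Strategy 1: rejection-sample until an irrelevant item j scores within
    the margin of (or above) the positive i, i.e. a 'violator'."""
    for _ in range(max_tries):
        j = random.choice(irrelevant)
        if score(u, j) > score(u, i) - margin:
            return j
    return None  # positives already well separated; skip this triple

def sample_from_fixed_subset(S_u):
    """Strategy 2: draw j from a fixed-size random subset S_u of irrelevant
    items for user u (one could also pick its highest-scoring member)."""
    return random.choice(list(S_u))
```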