Proceedings of the 24th International Conference on World Wide Web 2015
DOI: 10.1145/2736277.2741678
Collaborative Ranking with a Push at the Top

Abstract: The goal of collaborative filtering is to get accurate recommendations at the top of the list for a set of users. From such a perspective, collaborative ranking based formulations with suitable ranking loss functions are natural. While recent literature has explored the idea based on objective functions such as NDCG or Average Precision, such objectives are difficult to optimize directly. In this paper, building on recent advances from the learning to rank literature, we introduce a novel family of collaborative…

Cited by 43 publications (42 citation statements)
References 29 publications
“…Hence, it is reasonable to devise a popularity-aware sampler to replace the uniform one when recommending the top-N preferred music tracks (with item side information) to each user. To speed up the experiments, we follow the common practice in [6,31] of randomly sampling a subset of users from the user pool of the Yahoo dataset, and a subset of items from the item pool of the Last.fm dataset. The MLHt dataset is kept in its original form.…”
Section: Experimental Setup, Datasets, Evaluation Metrics and Baselines
confidence: 99%
“…In order to speed up the experiments, we follow the common practice as in [5] by randomly sampling a subset of users from the Libimseti and Yahoo datasets, and a subset of items from the Lastfm dataset. Table 1 summarizes the statistics of the three datasets used in this work.…”
Section: Settings
confidence: 99%
“…Since the frequency of a user listening to a song (i.e., relevance feedback) can be obtained from the Lastfm dataset, we also recommend songs by leveraging such information (denoted as FMF). Note that the frequency information has a much larger range than ratings (e.g., the [1,5] interval). For example, a user may listen to a song hundreds of times.…”
Section: Settings
confidence: 99%
“…Interestingly, we observe that it is possible to compute the approximate rank of a positive item i by sampling one incorrectly-ranked item. More specifically, given a (u, i) pair, we repeatedly draw an item from I until we obtain an incorrectly-ranked item j such that ŷ(x_i) − ŷ(x_j) ≤ ε and P^u_{ij} = 1, where ε is a positive margin value. Let T denote the number of sampling trials before obtaining such an item.…”
confidence: 99%
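The sampling step quoted above can be sketched in a few lines. This is a minimal illustration, not the cited paper's implementation: `score`, `items`, and `positive_set` are hypothetical names, and the excerpt's ŷ(x_i) − ŷ(x_j) ≤ ε plus P^u_{ij} = 1 condition is rendered as "j scores within ε of the positive item and is not itself preferred by the user".

```python
import random

def sample_violating_item(score, pos_item, items, positive_set,
                          eps=1.0, max_trials=1000):
    """Repeatedly draw items until one is incorrectly ranked relative to
    pos_item: score(pos_item) - score(j) <= eps and j is not preferred.
    Returns (j, T) where T is the number of trials taken, or (None,
    max_trials) if no violator is found. Helper names are illustrative."""
    for trial in range(1, max_trials + 1):
        j = random.choice(items)
        if j not in positive_set and score(pos_item) - score(j) <= eps:
            return j, trial
    return None, max_trials
```

The trial count T is what makes the trick work: the harder it is to find a violator, the higher the positive item is already ranked, so T carries an (inverse) estimate of its rank without scoring the whole catalog.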
“…In this algorithm, we iterate through all positive items i ∈ I_u for each user and update the model parameters w, V until the procedure converges. In each iteration, given a user-item pair, the sampling process is first performed so as to estimate violating… [Footnote 5: We hypothesize that item j is ranked higher than i for user u only if ŷ(x_i) − ŷ(x_j) ≤ ε; the default value of ε in this paper is set to 1.]…”
confidence: 99%
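The iterate-sample-update loop described in the last excerpt can be sketched as follows. Several details are assumptions not stated in the excerpt: scores come from a plain matrix-factorization model (the cited paper uses feature weights w and factors V), and the update is a WARP-style pairwise step whose weight log(1 + (|I| − 1) / T) is the standard rank estimate from the trial count T, not a formula quoted from the paper.

```python
import math
import random

def train_warp_mf(user_pos, n_users, n_items, dim=8, lr=0.05, eps=1.0,
                  epochs=5, seed=0):
    """Sketch of the loop: for each user's positive item i, sample an
    incorrectly-ranked item j, estimate i's rank from the trial count T,
    and apply a rank-weighted pairwise SGD update to factors U and V."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_items)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    for _ in range(epochs):
        for u, pos in user_pos.items():
            for i in pos:
                # Sample until we find an incorrectly-ranked item j:
                # score(i) - score(j) <= eps and j not preferred by u.
                j = None
                for T in range(1, n_items + 1):
                    cand = rng.randrange(n_items)
                    if (cand not in pos and
                            dot(U[u], V[i]) - dot(U[u], V[cand]) <= eps):
                        j = cand
                        break
                if j is None:
                    continue  # i already ranked safely above the samples
                # WARP-style rank weight from the trial count T (assumption).
                w = math.log(1 + (n_items - 1) // T)
                gu = [w * (V[j][k] - V[i][k]) for k in range(dim)]
                gi = [-w * U[u][k] for k in range(dim)]
                gj = [w * U[u][k] for k in range(dim)]
                for k in range(dim):
                    U[u][k] -= lr * gu[k]
                    V[i][k] -= lr * gi[k]
                    V[j][k] -= lr * gj[k]
    return U, V
```

The gradients are taken from a snapshot before any of the three parameter blocks is touched, so the pairwise update stays consistent within one step.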