2018 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2018.8621994
Top-N-Rank: A Scalable List-wise Ranking Method for Recommender Systems

Abstract: We propose Top-N-Rank, a novel family of listwise learning-to-rank models for reliably recommending the N top-ranked items. The proposed models optimize a variant of the widely used discounted cumulative gain (DCG) objective function, which differs from DCG in two important aspects: (i) it restricts the evaluation of DCG to the top N items in the ranked lists, thereby eliminating the impact of low-ranked items on the learned ranking function; and (ii) it incorporates weights that allow the model to leverage m…
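The truncated objective described in the abstract can be illustrated with a short sketch: DCG is computed only over the top N items of the ranking induced by the predicted scores, so items below the cutoff contribute nothing to the objective. This is an illustrative reading of the abstract, not the paper's implementation; the per-item `weights` argument is a hypothetical stand-in for the weighting scheme, whose description is truncated above.

```python
import numpy as np

def dcg_at_n(relevance, scores, n=10, weights=None):
    """Truncated DCG sketch: evaluate DCG only on the top-n items
    of the ranking induced by `scores`. `weights` is a hypothetical
    per-item weight vector (the paper's exact scheme is truncated
    in the abstract)."""
    order = np.argsort(-np.asarray(scores, dtype=float))[:n]  # top-n by score
    rel = np.asarray(relevance, dtype=float)[order]
    w = np.ones(len(order)) if weights is None else np.asarray(weights, dtype=float)[order]
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))   # 1 / log2(rank + 1)
    return float(np.sum(w * rel * discounts))
```

Note that changing the relevance of an item ranked below the cutoff leaves the value unchanged, which is exactly the property the abstract claims: low-ranked items cannot influence the learned ranking function.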

Cited by 7 publications (16 citation statements)
References 21 publications
“…The weight Φ_ui introduced in [5], [13] enables pairwise objectives to emphasize the loss of positive items at lower ranks; the value of Φ_ui is chosen to be larger when the approximated rank of a positive item i is lower. Some choices of Φ_ui are known to make optimizing the hinge loss closely related to maximizing discounted cumulative gain [18], [19].…”
Section: A. Objectives for Recommendation Models
confidence: 99%
“…Another popular method is to use a smooth function, such as a sigmoid or ReLU [32], to approximate the non-smooth indicator function. This method has been widely applied for optimizing DCG [25], AP [39], and RR [40]. Rather than optimizing the whole list and taking items at the bottom into account, Liang et al. [25] proposed Top-N-Rank, which focuses on the top-ranked items and uses a listwise loss with a cutoff to directly optimize DCG@k. Despite this rich track record of attempts to learn a ranking by metric optimization, still insufficient is known about what metric to optimize for in order to obtain the best performance according to some evaluation metric.…”
Section: Related Work
confidence: 99%
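The smoothing trick mentioned in the citation above can be sketched concretely: the true rank of item i is 1 plus the number of items scored above it, i.e. a sum of indicator functions, and replacing each indicator with a sigmoid yields a rank estimate that is differentiable in the scores. This is a generic illustration of the technique, not the cited papers' code; the function name and `temperature` parameter are assumptions.

```python
import numpy as np

def sigmoid(x):
    # clip to avoid overflow in exp for large negative inputs
    return 1.0 / (1.0 + np.exp(-np.clip(x, -500.0, 500.0)))

def approx_rank(scores, temperature=1.0):
    """Differentiable rank approximation: true rank of item i is
    1 + sum_j 1[s_j > s_i]; the indicator is replaced by a sigmoid.
    Smaller `temperature` gives a sharper (closer to exact) rank."""
    s = np.asarray(scores, dtype=float)
    diff = (s[None, :] - s[:, None]) / temperature  # diff[i, j] = (s_j - s_i) / T
    # the self-comparison term contributes sigmoid(0) = 0.5, so start from 0.5
    return 0.5 + sigmoid(diff).sum(axis=1)
```

As the temperature shrinks, the smooth ranks converge to the exact integer ranks, and the same substitution makes objectives built on ranks, such as DCG@k, amenable to gradient-based optimization.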