Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3357930

Candidate Generation with Binary Codes for Large-Scale Top-N Recommendation

Abstract: Generating the Top-N recommendations from a large corpus is computationally expensive to perform at scale. Candidate generation and re-ranking based approaches are often adopted in industrial settings to alleviate efficiency problems. However, it remains to be fully studied how well such schemes approximate complete rankings (or how many candidates are required to achieve a good approximation), and how to develop systematic approaches to generate high-quality candidates efficiently. In this paper, we seek to investi…
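
To make the scheme concrete, here is a minimal sketch of candidate generation with binary codes followed by re-ranking, in the spirit of the abstract: candidates are recalled by Hamming distance over compact codes, and full inner products are computed only for that small pool. The random codes and embeddings below are placeholders for the learned ones, and all names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_items, dim, n_bits = 10_000, 32, 64

# Real-valued embeddings used for exact scoring in the re-ranking stage.
item_emb = rng.normal(size=(n_items, dim))
user_emb = rng.normal(size=dim)

# Binary codes used for cheap candidate generation; random placeholders
# here, whereas the paper learns them from interaction data.
item_codes = rng.integers(0, 2, size=(n_items, n_bits), dtype=np.uint8)
user_code = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

def recall_by_hamming(user_code, item_codes, k):
    # Recall the k items whose codes are closest in Hamming distance.
    dists = np.count_nonzero(item_codes != user_code, axis=1)
    return np.argpartition(dists, k)[:k]

def rerank(user_emb, item_emb, cand, n):
    # Full similarity is evaluated only for the recalled candidates.
    scores = item_emb[cand] @ user_emb
    return cand[np.argsort(-scores)[:n]]

cand = recall_by_hamming(user_code, item_codes, k=200)
top_n = rerank(user_emb, item_emb, cand, n=10)

The approximation quality the abstract asks about is then governed by k: a larger candidate pool tracks the complete ranking more closely, at higher cost.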


Cited by 49 publications (45 citation statements) · References 42 publications
“…However, merely adopting the list-wise loss can have adverse effects on the ranking performance. Because a user is interested in only a few items among the numerous total items [10], learning the detailed ranking orders of all the unobserved items is not only daunting but also ineffective. The recommendation list from the teacher model contains information about a user's potential preference for each unobserved item: a few items that the user would be interested in (i.e., interesting items) are located near the top of the list, whereas the majority of items that the user would not be interested in (i.e., uninteresting items) are located far from the top.…”
Section: Relaxed Ranking Distillation (RRD)
confidence: 99%
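
A minimal sketch of the sampling idea this excerpt describes (not necessarily the exact RRD objective): a Plackett-Luce style list-wise loss is applied only to a few teacher-ranked interesting items, while a sampled uninteresting tail enters the normalizer without imposing any order among its own items. All tensors, indices, and sizes below are hypothetical.

import torch

def relaxed_listwise_loss(student_scores, top_idx, tail_idx):
    # Each interesting item (in teacher order) should outrank the
    # interesting items below it and all sampled uninteresting items;
    # the ordering within the uninteresting tail is deliberately ignored.
    s_top = student_scores[top_idx]    # (K,) in teacher-ranked order
    s_tail = student_scores[tail_idx]  # (M,) sampled from the tail
    loss = 0.0
    for k in range(s_top.shape[0]):
        pool = torch.cat([s_top[k:], s_tail])
        loss = loss - (s_top[k] - torch.logsumexp(pool, dim=0))
    return loss

# Hypothetical usage for one user over a 1,000-item catalogue:
scores = torch.randn(1000, requires_grad=True)   # student model scores
top_idx = torch.tensor([3, 17, 42])              # teacher's top items
tail_idx = torch.randint(100, 1000, (50,))       # sampled tail items
relaxed_listwise_loss(scores, top_idx, tail_idx).backward()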
“…To validate the model and ensure a fair comparison on the test set, the dimensions of the user and item representations are set to the same value in all methods and all parameters are tuned. On the Amazon datasets, the regularization parameter is 20 in most cases, and BPR-MF [35], FPMC [36], VBPR [18], and CIGAR [37] perform best on area under the curve (AUC). For each dataset, we report the average AUC (over all items) on the complete test set T and on the subset of T that contains only items with fewer than 5 positive feedback instances.…”
Section: Evaluation Methods and Discussion
confidence: 99%
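
For context, the AUC this excerpt references is typically computed per user as the fraction of unobserved items that a held-out positive outranks, then averaged over users; the "cold" variant restricts the average to held-out items with fewer than 5 positive feedback instances. A sketch under that assumption (the cited paper's exact protocol may differ):

import numpy as np

def per_user_auc(scores, held_out_pos, train_items):
    # Fraction of items the user never interacted with that are
    # scored below the held-out positive item.
    mask = np.ones(scores.shape[0], dtype=bool)
    mask[train_items] = False
    mask[held_out_pos] = False
    return float(np.mean(scores[held_out_pos] > scores[mask]))

# The reported number would be the mean of per_user_auc over all users.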
“…As the majority of items are irrelevant to the users, the two aforementioned steps are usually employed to achieve high efficiency in personalized recommendation (Covington et al, 2016; Kang & McAuley, 2019). On one hand, the candidate generation component relies on heuristics or retrieval-efficient structures to recall a small subset of relevant items, reducing the number of similarity evaluations in the ranking step (Guo et al, 2020).…”
Section: Candidate Generation
confidence: 99%
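
A minimal sketch of that two-stage structure, with item popularity standing in for the recall heuristic (a production system would use a retrieval-efficient index, such as an ANN structure, instead); the point is that expensive similarity is evaluated for only pool items rather than the whole catalogue.

import numpy as np

def two_stage_topn(user_emb, item_emb, popularity, n=10, pool=500):
    # Cheap O(|I|) pass: recall a pool of the most popular items.
    cand = np.argpartition(-popularity, pool)[:pool]
    # Expensive pass: similarity evaluations only on the recalled pool.
    scores = item_emb[cand] @ user_emb
    return cand[np.argsort(-scores)[:n]]

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(100_000, 64))
user_emb = rng.normal(size=64)
popularity = rng.random(100_000)
top_items = two_stage_topn(user_emb, item_emb, popularity)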