2019
DOI: 10.1145/3291756

Top-N Recommendation with Multi-Channel Positive Feedback using Factorization Machines

Abstract: User interactions can be considered to constitute different feedback channels, for example, view, click, like, or follow, that provide implicit information on users’ preferences. Each implicit feedback channel typically carries a unary, positive-only signal that can be exploited by collaborative filtering models to generate lists of personalized recommendations. This article investigates how a learning-to-rank recommender system can best take advantage of implicit feedback signals from multiple channels. We foc…
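
The abstract describes a learning-to-rank setup over unary, positive-only feedback. As a hedged illustration of what such pairwise learning-to-rank looks like, here is a minimal BPR-style gradient step; it uses a plain matrix-factorization scorer rather than the paper's factorization machine, and all names and hyperparameters are assumptions made for the sketch.

```python
import numpy as np

def bpr_sgd_step(P, Q, u, i, j, lr=0.05, reg=0.01):
    """One BPR stochastic gradient step on a matrix-factorization scorer:
    push the score of the observed positive item i above the sampled
    negative item j for user u (gradient ascent on ln sigmoid(x_ui - x_uj))."""
    pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
    x_uij = pu @ (qi - qj)            # score difference x_ui - x_uj
    g = 1.0 / (1.0 + np.exp(x_uij))   # = sigmoid(-x_uij), the loss gradient
    P[u] += lr * (g * (qi - qj) - reg * pu)
    Q[i] += lr * (g * pu - reg * qi)
    Q[j] += lr * (-g * pu - reg * qj)

# Toy usage: 5 users, 8 items, 4 latent factors; user 0 interacted with
# item 2, and item 6 was sampled as an (assumed un-interacted) negative.
rng = np.random.default_rng(0)
P, Q = rng.normal(size=(5, 4)), rng.normal(size=(8, 4))
bpr_sgd_step(P, Q, u=0, i=2, j=6)
```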

Cited by 16 publications (7 citation statements)
References 31 publications (85 reference statements)

“…It is worth mentioning that an observation from the experiments implies that the pair of an interacted item i and an un-interacted item j is a better candidate pair for model training, i.e., an un-interacted item has a higher probability of being a negative item. Consequently, the extended method of MF-BPR [Loni et al. 2019] improves the sampling strategy so that negative items are picked only from the un-interacted items, meaning P⁻ is always the un-interacted level. The authors introduce three different samplers for negative items: the first is the uniform sampler of vanilla BPR; the second over-samples popular items to address the popularity skewness problem; and the third is a multi-channel sampler that considers both item popularity and the feedback level.…”
Section: Methods in HoCCF (mentioning)
confidence: 99%
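
To make the three samplers described above concrete, a minimal sketch follows. The data layout (feedback[u] mapping each interacted item to its channel level, pop for popularity counts, level_weight for channel weights) is assumed for illustration and is not the paper's exact formulation.

```python
import random

# Assumed layout (not from the paper): feedback[u] maps each item user u
# interacted with to its channel level, e.g. {"i42": "like", "i7": "view"};
# pop[i] counts item i's interactions over all users; level_weight is an
# assumed mapping such as {"like": 3, "click": 2, "view": 1}.

def uniform_negative(u, all_items, feedback):
    """Vanilla BPR sampler: any un-interacted item, uniformly at random."""
    while True:
        j = random.choice(all_items)
        if j not in feedback[u]:
            return j

def popularity_negative(u, all_items, pop, feedback):
    """Over-sample popular un-interacted items: a popular item the user
    ignored is a stronger ('harder') negative under popularity skew."""
    weights = [pop[i] for i in all_items]
    while True:
        j = random.choices(all_items, weights=weights)[0]
        if j not in feedback[u]:
            return j

def multichannel_pair(u, all_items, pop, feedback, level_weight):
    """Multi-channel sketch: draw the positive i in proportion to the
    weight of its feedback level, and the negative j from un-interacted
    items in proportion to popularity."""
    items = list(feedback[u])
    w = [level_weight[feedback[u][i]] for i in items]
    i = random.choices(items, weights=w)[0]
    j = popularity_negative(u, all_items, pop, feedback)
    return i, j
```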
“…Before diving into the details of the comparisons, we first give a brief introduction to the evaluation protocols and metrics used in the reviewed works. For evaluation protocols, hold-out [Guo et al. 2017a], k-fold cross-validation [Loni et al. 2019; Qiu et al. 2018], and leave-one-out [Chen et al. 2020; Zhou et al. 2019] are the three commonly used options. The evaluation metrics are similar to those in most work on information retrieval and recommender systems, including hit ratio, precision, recall, F1, NDCG, 1-call, mean reciprocal rank (MRR), mean average precision (MAP), and area under the curve (AUC) [Ricci et al. 2015].…”
Section: Empirical Comparisons of Different Methods (mentioning)
confidence: 99%
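
As a concrete reading of the leave-one-out protocol and two of the listed metrics, here is a minimal sketch; the function name and data shapes are assumptions, not taken from any of the cited works.

```python
import math

def hit_ratio_and_ndcg(ranked_items, held_out_item, k=10):
    """Leave-one-out evaluation (sketch): one interaction per user is held
    out; HR@k is 1 if it appears in the top-k recommendations, and NDCG@k
    discounts the hit by its rank position (a single relevant item means
    the ideal DCG is 1)."""
    top_k = ranked_items[:k]
    if held_out_item not in top_k:
        return 0.0, 0.0
    rank = top_k.index(held_out_item)       # 0-based position of the hit
    return 1.0, 1.0 / math.log2(rank + 2)

# Example: the held-out item is ranked 3rd in the top-10 list.
hr, ndcg = hit_ratio_and_ndcg(["a", "b", "c", "d"], "c", k=10)
print(hr, ndcg)  # 1.0 0.5
```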
“…Another group of studies exploits tensor factorization techniques for modeling user-item-context relations [43,44]. Recently, CARS based on factorization machines [42,45,46] and deep learning [47,48] have become increasingly popular, as they directly model nonlinear interactions between features. Some studies also use representation learning techniques, e.g., CARS2 [49] and COT [37], which provide not only a latent vector but also context-aware representations.…”
Section: Related Work (mentioning)
confidence: 99%
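
Since several of the cited lines of work build on factorization machines, a minimal sketch of the standard second-order FM score (Rendle's model) may help; the one-hot feature layout in the example is an assumption for illustration.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization machine: a linear term plus pairwise
    feature interactions factorized through latent vectors, using the
    O(k*n) identity
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f ((Vx)_f^2 - (V^2 x^2)_f)."""
    linear = w0 + w @ x
    vx = V.T @ x                   # shape (k,): sum_i v_{i,f} x_i
    v2x2 = (V ** 2).T @ (x ** 2)   # shape (k,): sum_i v_{i,f}^2 x_i^2
    return linear + 0.5 * np.sum(vx ** 2 - v2x2)

# Example with an assumed one-hot encoding of (user, item, context):
n, k = 6, 3
rng = np.random.default_rng(0)
x = np.array([1.0, 0, 0, 1.0, 0, 1.0])   # user 0, item 0, context 2
print(fm_score(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```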