2024
DOI: 10.1145/3627159
Robust Collaborative Filtering to Popularity Distribution Shift

An Zhang,
Wenchang Ma,
Jingnan Zheng
et al.

Abstract: In leading collaborative filtering (CF) models, representations of users and items are prone to learning popularity bias in the training data as shortcuts. These popularity shortcuts benefit in-distribution (ID) performance but generalize poorly to out-of-distribution (OOD) data, i.e., when the popularity distribution of the test data shifts w.r.t. the training one. To close the gap, debiasing strategies try to assess the shortcut degrees and mitigate them from th…

Cited by 3 publications (4 citation statements)
References 70 publications (140 reference statements)
“…They proposed that high uniformity would lead to low tolerance and may make the learned model push away two samples with similar semantics. Besides, the authors of [49] studied how to use popularity-degree information to help the collaborative model automatically adjust the optimization intensity of the collaborative representations for any user-item pair [49]. The most related work to ours is SupCon [12].…”
Section: Self-supervised Learning Technique
confidence: 99%
“…Given that propensity scores in IPS approaches can exhibit high variance, many studies [8, 24] have turned to normalization or smoothing penalties to ensure model stability. Recent works [47, 66–68] have drawn inspiration from Stable Learning and Causal Inference. For instance, MACR [56] conducts counterfactual inference using a causal graph, postulating that popularity bias originates from the item node influencing the ranking score.…”
Section: Appendix A.1 Related Work
confidence: 99%
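The variance-control tricks this citing passage attributes to IPS approaches (clipping small propensities, normalizing the weights) can be sketched as follows. This is a minimal illustration, not code from the cited papers; the function name and the `clip_min` parameter are hypothetical.

```python
import numpy as np

def ips_weighted_loss(losses, propensities, clip_min=0.1):
    """Self-normalized, clipped inverse-propensity-scored loss.

    Clipping propensities away from zero and normalizing the resulting
    weights are two common remedies for the high variance of raw IPS
    estimates, as the quoted passage notes.
    """
    p = np.clip(propensities, clip_min, 1.0)  # smooth tiny propensities
    w = 1.0 / p                                # inverse-propensity weights
    w = w / w.sum()                            # self-normalization
    return float(np.sum(w * losses))

# A rarely observed (low-propensity) interaction is up-weighted relative
# to a frequently observed one, counteracting popularity bias.
loss = ips_weighted_loss(np.array([1.0, 2.0]), np.array([0.9, 0.1]))
```

With uniform propensities the objective reduces to the ordinary mean loss, which is one quick sanity check for an implementation like this.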
“…sDRO [57] integrates a Distributionally Robust Optimization (DRO) framework to minimize loss variances in long-tailed data distributions. PopGo [66] quantifies and reduces the interaction-wise popularity shortcut and is theoretically interpreted from both causality and information theory. However, devising these causal graphs and understanding the environmental context often hinge on heuristic insights from researchers.…”
Section: Appendix A.1 Related Work
confidence: 99%
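The DRO idea the passage ascribes to sDRO [57] — minimizing a worst-case rather than average loss over a long-tailed distribution — can be sketched with a softmax re-weighting over group losses (e.g., popularity buckets). This is a generic group-DRO sketch, not the sDRO objective itself; the function name and `eta` parameter are illustrative.

```python
import numpy as np

def group_dro_loss(group_losses, eta=1.0):
    """Distributionally robust objective over groups.

    Instead of averaging, up-weight the worst-off groups via a softmax
    over their losses; eta controls how adversarial the weighting is
    (eta=0 recovers the plain mean).
    """
    g = np.asarray(group_losses, dtype=float)
    w = np.exp(eta * g)
    w = w / w.sum()            # softmax weights over groups
    return float(np.sum(w * g))

# Tail-popularity groups with high loss dominate the objective,
# pushing the model to reduce their error first.
robust = group_dro_loss([0.5, 2.0, 3.5], eta=1.0)
```

Because the weighting is monotone in the group losses, this objective always upper-bounds the uniform average, which is the property DRO-style methods exploit on long-tailed data.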