Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3411954
S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

Abstract: Recently, significant progress has been made in sequential recommendation with deep learning. Existing neural sequential recommendation models usually rely on the item prediction loss to learn model parameters or data representations. However, a model trained with this loss is prone to the data sparsity problem. Because it overemphasizes final performance, the association or fusion between context data and sequence data is not well captured and utilized for sequential recommendation. To tackl…
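The "item prediction loss" the abstract critiques is, in most neural sequential recommenders, a cross-entropy over the next item. A minimal PyTorch sketch of that objective follows; the encoder output, tensor names, and sizes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the next-item prediction loss the abstract refers to.
# `seq_repr` is a hypothetical name; any sequence encoder (e.g., a
# self-attentive network) could produce it.

num_items, hidden = 10_000, 64
item_emb = torch.nn.Embedding(num_items, hidden)

seq_repr = torch.randn(32, hidden)             # (batch, hidden): encoder output at the last step
targets = torch.randint(0, num_items, (32,))   # ground-truth next items

logits = seq_repr @ item_emb.weight.T          # score every candidate item
loss = F.cross_entropy(logits, targets)        # the item prediction loss
```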

Cited by 422 publications (299 citation statements) · References 19 publications
“…As the research of self-supervised learning is still in its infancy, only a few works combine it with recommender systems [24,44,45,64]. These efforts either mine self-supervision signals from future/surrounding sequential data [24,45], or mask attributes of items/users to learn correlations in the raw data [64]. However, these ideas cannot be easily adopted in social recommendation, where temporal factors and attributes may not be available.…”
Section: Self-supervised Learning
confidence: 99%
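The masked-attribute strategy this statement attributes to [64] can be illustrated with a short sketch: mask an item's attribute and predict it from the item representation alone. This is a hedged toy version assuming a single categorical attribute per item, not the cited paper's exact objectives.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of masking item attributes to learn correlations in the raw
# data. All names and shapes are illustrative assumptions.

num_items, num_attrs, hidden = 10_000, 500, 64
item_emb = torch.nn.Embedding(num_items, hidden)
attr_emb = torch.nn.Embedding(num_attrs, hidden)

items = torch.randint(0, num_items, (32,))      # items whose attribute is masked
true_attr = torch.randint(0, num_attrs, (32,))  # the masked attribute ids

# Predict the masked attribute from the item representation:
logits = item_emb(items) @ attr_emb.weight.T
loss = F.cross_entropy(logits, true_attr)       # self-supervised signal, no labels needed
```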
“…However, the transitivity assumption of the messages is not always valid due to the existence of negative edges. Inspired by recent self-supervised learning work with MI maximization [57,58] and knowledge graph embedding (KGE) [59-61], we propose a relation representation learning framework via signed-graph mutual information maximization, and then use the learned vector representations as input to neural networks for the task of trust prediction.…”
Section: Proposed Methods
confidence: 99%
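Mutual information maximization of this kind is typically realized with a contrastive (InfoNCE-style) bound: paired views of the same node are pulled together, views of different nodes pushed apart. The sketch below is a minimal version under that assumption; the tensors and pairing scheme are hypothetical, not the cited framework's actual design.

```python
import torch
import torch.nn.functional as F

# InfoNCE-style MI maximization sketch. A real signed-graph model would
# produce `z_a` and `z_b` as two views of the same node embeddings.

z_a = F.normalize(torch.randn(128, 64), dim=-1)  # view A of 128 node embeddings
z_b = F.normalize(torch.randn(128, 64), dim=-1)  # view B of the same nodes

temperature = 0.2
logits = (z_a @ z_b.T) / temperature             # similarity of every cross-view pair
labels = torch.arange(z_a.size(0))               # positives sit on the diagonal
loss = F.cross_entropy(logits, labels)           # InfoNCE: a lower bound on MI (up to constants)
```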
“…This work conducts experiments on five common data sets collected from real-world platforms, which come from different domains and have different sparsity levels. To ensure that each user/item has enough interactions, we follow the preprocessing procedure in Zhou et al. (2020), which keeps only the "5-core" data sets. This means that users and items with fewer than five interaction records are deleted.…”
Section: Data Sets
confidence: 99%
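The 5-core filter is usually applied iteratively, since dropping a sparse user can push an item below the threshold and vice versa. A plain-Python sketch under that assumption, with a hypothetical list of (user, item) interaction pairs:

```python
from collections import Counter

def five_core(interactions, k=5):
    """Iteratively drop users/items with fewer than k interactions.

    `interactions` is a hypothetical list of (user_id, item_id) pairs.
    """
    while True:
        users = Counter(u for u, _ in interactions)
        items = Counter(i for _, i in interactions)
        kept = [(u, i) for u, i in interactions
                if users[u] >= k and items[i] >= k]
        if len(kept) == len(interactions):  # fixed point: nothing left to drop
            return kept
        interactions = kept
```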
“…The Yelp data set is collected from Yelp [3], the largest review site in the USA. We follow the preprocessing procedure in Zhou et al. (2020) and use the transaction records after January 1st, 2019. In addition, we treat the categories of businesses as attributes.…”
Section: Data Sets
confidence: 99%
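A hedged pandas sketch of that filtering step; the column names (`date`, `categories`) are assumptions for illustration, not the actual Yelp dump's schema.

```python
import pandas as pd

# Hypothetical frame of Yelp transactions.
df = pd.DataFrame({
    "user_id": ["u1", "u2"],
    "business_id": ["b1", "b2"],
    "date": ["2018-12-30", "2019-03-14"],
    "categories": ["Bars, Nightlife", "Coffee & Tea"],
})

df["date"] = pd.to_datetime(df["date"])
df = df[df["date"] > "2019-01-01"]               # keep records after January 1st, 2019

# Treat business categories as item attributes (one list per business):
df["attributes"] = df["categories"].str.split(", ")
```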