2021
DOI: 10.3389/fdata.2021.602071
Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect

Abstract: Recommender systems aim to provide item recommendations for users and are usually faced with data sparsity problems (e.g., cold start) in real-world scenarios. Recently, pre-trained models have shown their effectiveness in knowledge transfer between domains and tasks, which can potentially alleviate the data sparsity problem in recommender systems. In this survey, we first provide a review of recommender systems with pre-training. In addition, we show the benefits of pre-training to recommender systems through …

Cited by 28 publications (23 citation statements). References 52 publications.
“…Future Modeling in Recommendation. There are some works that indirectly consider future behaviors in sequence-based recommendation [40]. These models are mostly inspired by the masked language model (MLM) of pre-training [7].…”
Section: Related Work
Mentioning confidence: 99%
“…There are very few works that consider future information in recommendation. Recently, with the thriving of pre-training, some works introduce the masked language model (MLM) pre-training task in NLP [7] to sequence-based recommendation, which consider bidirectional information to learn better sequential models via self-supervised learning (SSL) [26,40]. The MLM task randomly masks some items in the user behavior sequence during training, and then attempts to predict them with the remaining unmasked contexts [38,39].…”
Mentioning confidence: 99%
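The masking step quoted above is simple to sketch. Below is a minimal Python sketch of MLM-style corruption of a user behavior sequence; it is not taken from the cited papers — the constants MASK_ID and PAD_LABEL, the masking probability, and the function name are all illustrative assumptions.

```python
import random

# Illustrative constants (assumptions, not from the cited papers):
MASK_ID = 0       # id reserved for the [mask] token
PAD_LABEL = -100  # label for unmasked positions, ignored by the loss

def mask_behavior_sequence(items, mask_prob=0.15, rng=random):
    """MLM-style corruption of a user behavior sequence.

    Each item is replaced by MASK_ID with probability mask_prob;
    the model is then trained to predict the original item at the
    masked positions from the remaining (bidirectional) context.
    """
    masked, labels = [], []
    for item in items:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)    # hide the item
            labels.append(item)       # target: recover the original id
        else:
            masked.append(item)
            labels.append(PAD_LABEL)  # excluded from the training loss
    return masked, labels

# Toy usage on a short sequence of item ids
masked_seq, labels = mask_behavior_sequence([12, 7, 55, 3, 90, 21], mask_prob=0.3)
```

At inference time, one common trick is to append a single mask token at the end of the sequence, turning the same bidirectional model into a next-item predictor.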
“…Sequence-based recommendation mainly leverages users' sequential behavior to mine users' interests, which focuses on individual information. Recently, various deep neural networks have been employed for sequence-based recommendation, e.g., RNN [7], memory networks [3], attention mechanisms [23,35,15,30] and mixed models [29,20].…”
Section: Related Work
Mentioning confidence: 99%
“…(2) Text CNN [9] encodes utterances in the current session to learn user preferences with a CNN-based model. (3) BERT [8] is a pre-training [34] model that encodes current utterances for recommendation. (4) ReDial [14] is a CRS method that adopts an auto-encoder framework.…”
Section: Datasets
Mentioning confidence: 99%