2020
DOI: 10.48550/arxiv.2009.13724
Preprint

One Person, One Model, One World: Learning Continual User Representation without Forgetting

Cited by 4 publications (6 citation statements)
References 36 publications
“…To begin with, we first train a DL-based sequential recommendation model on a large source dataset and use it as our pre-trained model. This practice has been well verified by recent studies in [11], [12] (as well as in the NLP domain [13], [14], [15]), since sequential neural networks can be trained in a self-supervised manner and thus generate more universal user representations for effective transfer learning. In this paper, we choose the temporal CNN model NextItNet [2] as the pre-trained backbone model given its efficient network structure [16], [17], superior performance [2], [18], [19], and widespread usage in modeling sequential recommendation data [11], [12], [19], [20], [21], [22] in recent literature.…”
Section: Introduction
Mentioning confidence: 60%
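The excerpt above describes pre-training the temporal CNN NextItNet with a self-supervised next-item objective and reusing it as a transferable backbone. Below is a minimal sketch of such a dilated causal CNN in PyTorch, offered only to illustrate the idea: the class names (CausalConvBlock, NextItNetSketch), layer sizes, and dilation schedule are assumptions of this sketch, not the configuration used in the cited papers.

```python
# Minimal sketch of a NextItNet-style dilated causal CNN for next-item
# prediction (PyTorch assumed). All names, layer sizes, and the dilation
# schedule below are illustrative assumptions, not the papers' exact setup.
import torch
import torch.nn as nn


class CausalConvBlock(nn.Module):
    """Residual block with a dilated 1D convolution, left-padded so each
    position only sees earlier items in the sequence (causal)."""

    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                              # x: (batch, channels, seq_len)
        h = nn.functional.pad(x, (self.left_pad, 0))   # pad on the left only
        h = torch.relu(self.conv(h))                   # causal dilated convolution
        h = self.norm(h.transpose(1, 2)).transpose(1, 2)
        return x + h                                   # residual connection


class NextItNetSketch(nn.Module):
    """Item embedding -> stack of dilated causal blocks -> per-step logits."""

    def __init__(self, num_items, dim=64, dilations=(1, 2, 4, 1, 2, 4)):
        super().__init__()
        self.embed = nn.Embedding(num_items, dim)
        self.blocks = nn.Sequential(*[CausalConvBlock(dim, 3, d) for d in dilations])
        self.out = nn.Linear(dim, num_items)

    def forward(self, item_ids):                       # item_ids: (batch, seq_len)
        x = self.embed(item_ids).transpose(1, 2)       # -> (batch, dim, seq_len)
        x = self.blocks(x).transpose(1, 2)             # -> (batch, seq_len, dim)
        return self.out(x)                             # logits over the next item
```

Self-supervised pre-training then amounts to shifting the sequence by one step, e.g. cross_entropy(logits[:, :-1].reshape(-1, num_items), item_ids[:, 1:].reshape(-1)), so the interaction log itself provides the training signal; the resulting user representations are what the cited works transfer to downstream tasks.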
“…They also presented several methodologies to accelerate the training and inference processes of NextItNet. More recently, PeterRec [11] and Conure [12] demonstrated that the user representations learned by NextItNet are generic and can be transferred to solve various downstream recommendation problems. Meanwhile, self-attention based models, such as SASRec [4] and BERT4Rec [25], have also shown competitive performance on the SRS task.…”
Section: Sequential Recommender Systems (SRS)
Mentioning confidence: 99%