Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331230

Lifelong Sequential Modeling with Personalized Memorization for User Response Prediction

Abstract: User response prediction, which models a user's preference w.r.t. the presented items, plays a key role in online services. After two decades of rapid development, the accumulated user behavior sequences on mature Internet service platforms have become extremely long, reaching back to each user's first registration. Each user not only has intrinsic tastes but also keeps changing her personal interests over her lifetime. Hence, it is challenging to handle such lifelong sequential modeling for each individual user. Existing …

Cited by 80 publications (19 citation statements)
References 52 publications
“…For all datasets, we aggregate each user's interaction records and sort them by action timestamp in chronological order. For evaluation purposes, we adopt the data split strategy in [33], [53]. Specifically, suppose a user has L historical behaviors sorted by time; the behavior sequence [1, L − 3] is used for training, and the model predicts whether she/he will interact with the (L−2)-th item.…”
Section: Methods
Mentioning, confidence: 99%
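To make the quoted split concrete, here is a minimal sketch assuming each user's history is already a chronologically sorted list; the function name and the use of the remaining items as validation/test targets are assumptions for illustration, not details taken from the cited papers.

```python
# Minimal sketch of the leave-last-items-out split described above.
# Assumptions (not from the cited papers): `behaviors` is one user's
# interaction list already sorted by timestamp, and the items beyond the
# (L-2)-th are reserved as validation/test targets in the usual way.

def split_user_sequence(behaviors):
    """Split a chronologically sorted behavior list of length L."""
    L = len(behaviors)
    if L < 3:
        return None  # too short to yield train/validation/test targets

    train_input = behaviors[: L - 3]   # the [1, L-3] prefix used for training
    train_target = behaviors[L - 3]    # the (L-2)-th item to be predicted
    # Hypothetical continuation of the protocol: later items are commonly
    # held out as validation and test targets, one step ahead each time.
    valid_target = behaviors[L - 2]    # the (L-1)-th item
    test_target = behaviors[L - 1]     # the L-th item
    return train_input, train_target, valid_target, test_target


# Example: a user with L = 6 behaviors.
print(split_user_sequence(["i1", "i2", "i3", "i4", "i5", "i6"]))
# (['i1', 'i2', 'i3'], 'i4', 'i5', 'i6')
```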
“…In point-wise approaches, many SOTA models focus on feature interaction design, such as DeepFM [26], PNN [27], DCN [28], xDeepFM [29], DCN-M [6] and AutoInt [5]. Other models try to capture users' sequential interest patterns, including GRU4Rec [30], Caser [31], SASRec [32], MIMN [33], HPMN [34], DIN [35] and DIEN [36]. In pair-wise approaches, SVMRank [37] was the pioneer in transforming ranking tasks into classification tasks.…”
Section: Related Work (A. Learning-to-Rank Approaches)
Mentioning, confidence: 99%
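For readers unfamiliar with the point-wise versus pair-wise distinction drawn above, the following sketch contrasts a point-wise log loss with an SVMRank-style pair-wise hinge loss on a score difference; the raw scores are placeholders, and nothing here reproduces any of the cited models.

```python
# Illustrative contrast between the two loss families named above.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pointwise_logloss(score, label):
    """Point-wise: treat each (user, item) pair as a binary classification."""
    p = sigmoid(score)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def pairwise_hinge(pos_score, neg_score, margin=1.0):
    """Pair-wise (SVMRank-style): classify whether the preferred item is
    ranked above the other one, via a hinge on the score difference."""
    return max(0.0, margin - (pos_score - neg_score))

print(round(pointwise_logloss(2.0, 1), 4))  # small loss: clicked item scored high
print(pairwise_hinge(2.0, -1.0))            # 0.0: positive beats negative by > margin
```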
“…Similarly, the Recurrent Recommender Network (RRN) [62] proposed to employ an LSTM [22] to capture the dynamics of both users and items. Complementary to the RNN-based methods, Huang et al. [25] and Ren et al. [44] alternatively adopted memory networks [13], [55] to learn short-term patterns together with item attributes. Meanwhile, the Sequential Hierarchical Attention Network (SHAN) [64] and [26], [67] introduced an attention mechanism to automatically assign different influences to the items in a user's long-term set so that dynamic properties can be captured, and then relied on another attention layer to couple the user's sequential behavior with the long-term representation.…”
Section: Related Work
Mentioning, confidence: 99%
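The attention-based idea described above can be sketched generically: score each item in the user's long-term set against a query vector (e.g. a short-term interest representation) and pool the items by the resulting weights. The dimensions, the dot-product scoring form, and the function names below are illustrative assumptions, not SHAN's exact design.

```python
# Generic attention pooling over a user's long-term item set, in the spirit
# of the attention-based methods cited above (illustrative only).
import numpy as np

def attention_pool(item_embs, query, temperature=1.0):
    """item_embs: (n_items, d) long-term item embeddings.
    query: (d,) e.g. a user embedding or short-term interest vector.
    Returns a (d,) long-term representation as a weighted sum of items."""
    scores = item_embs @ query / temperature   # relevance of each item to the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the item set
    return weights @ item_embs                 # attention-weighted sum

rng = np.random.default_rng(0)
long_term_items = rng.normal(size=(8, 16))     # 8 historical items, d = 16
user_query = rng.normal(size=16)
long_term_repr = attention_pool(long_term_items, user_query)
print(long_term_repr.shape)                    # (16,)
```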
“…However, existing RNN-based methods [62], [65], [68], whose advantage lies in capturing temporal or sequential user behaviors, may not be suitable for modeling the long-term impact of previously purchased items on future ones [9]. To remedy this, many works [9], [25], [26], [44], [60], [64], [67] proposed to use external memory networks (EMN) [55], [61] and the attention mechanism [11], [57], where the shorter path between any two positions in the sequence makes the long-term impact easier to learn. Despite their effectiveness, the ordinal information of historical items is usually not explicitly considered in these works, which may lead to suboptimal performance because the sequential patterns contained in the user's behavior sequence may be neglected.…”
Section: Introduction
Mentioning, confidence: 99%
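To illustrate the "shorter path" argument and the missing ordinal information noted above, the sketch below shows single-head self-attention, where every position attends to every other position in a single step, plus a standard sinusoidal positional encoding as one generic way to inject item order. This is an assumed toy example, not the architecture of any cited work.

```python
# Every output of self-attention sees all positions directly (constant path
# length), and ordinal information must be injected explicitly, e.g. via
# positional encodings. Illustrative sketch only.
import numpy as np

def self_attention(x):
    """x: (L, d) behavior embeddings. Single head, no projections, for brevity."""
    scores = x @ x.T / np.sqrt(x.shape[1])        # (L, L): direct pairwise links
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                            # each output attends to all items

def sinusoidal_positions(L, d):
    """Standard sinusoidal encodings carrying the items' ordinal positions."""
    pos = np.arange(L)[:, None]
    dim = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (dim // 2)) / d)
    return np.where(dim % 2 == 0, np.sin(angles), np.cos(angles))

L, d = 10, 16
behaviors = np.random.default_rng(1).normal(size=(L, d))
out = self_attention(behaviors + sinusoidal_positions(L, d))
print(out.shape)   # (10, 16): the 1st and 10th items interact in one step
```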