2021
DOI: 10.48550/arxiv.2112.06460
Preprint

Sequential Recommendation with Bidirectional Chronological Augmentation of Transformer

Abstract: Sequential recommendation can capture users' chronological preferences from their historical behaviors, yet learning from short sequences remains an open challenge. Recently, data augmentation with pseudo-prior items generated by transformers has drawn considerable attention for improving recommendation on short sequences and addressing the cold-start problem. These methods typically generate pseudo-prior items sequentially in reverse chronological order (i.e., from the future to the past) to obtain longer sequences…
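The core idea in the abstract, prepending generated pseudo-prior items from the future to the past until a short sequence is long enough, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's implementation: `predict_prior`, `target_len`, and the dummy generator are all hypothetical names.

```python
# Minimal sketch of reverse-chronological pseudo-prior augmentation,
# assuming a generator that predicts the item PRECEDING a given sequence.
# Names (predict_prior, target_len) are illustrative, not the paper's API.
from typing import Callable, List

def augment_short_sequence(
    seq: List[int],
    predict_prior: Callable[[List[int]], int],
    target_len: int,
) -> List[int]:
    """Prepend pseudo-prior items until the sequence reaches target_len.

    seq           -- user's historical item IDs, oldest first
    predict_prior -- model that, given a sequence, guesses the item
                     that most plausibly came before it
    target_len    -- desired augmented sequence length
    """
    augmented = list(seq)
    while len(augmented) < target_len:
        # Generate from the future to the past: each new pseudo item
        # is placed at the head of the growing sequence.
        pseudo_item = predict_prior(augmented)
        augmented.insert(0, pseudo_item)
    return augmented

# Toy usage with a dummy "generator" that always guesses item 0.
short_history = [42, 7, 13]
print(augment_short_sequence(short_history, lambda s: 0, target_len=6))
# [0, 0, 0, 42, 7, 13]
```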

Cited by 5 publications (7 citation statements) | References 32 publications
“…Recent works on the aforementioned approaches focus on time-aware methods, which better retain time coherence between the augmented sequences and the original ones and further enhance the model's performance [6, 32]. Beyond these basic approaches, some data augmentation methods create highly plausible sequences by synthesizing and injecting/prepending fake samples into the original sequence [12, 15, 28], or by modeling a counterfactual data distribution [47, 58]. Apart from sequential recommendation, data augmentation techniques are also applied in collaborative filtering to alleviate the data sparsity problem [43] or to bypass negative sampling [21] during training.…”
Section: Data Augmentation for Recommendation
confidence: 99%
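The "basic approaches" this statement contrasts against are typically simple sequence-level perturbations such as random crop, mask, and reorder, popularized by contrastive sequential-recommendation methods. A brief sketch follows; the ratios and the reserved `MASK_TOKEN` ID are illustrative defaults, not taken from any one cited paper.

```python
# Sketch of common sequence-level augmentations (crop / mask / reorder).
# Ratios and MASK_TOKEN are assumed values for illustration only.
import random
from typing import List

MASK_TOKEN = 0  # assumed reserved ID for a masked item

def crop(seq: List[int], ratio: float = 0.5) -> List[int]:
    """Keep a random contiguous window covering `ratio` of the sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq: List[int], ratio: float = 0.3) -> List[int]:
    """Independently replace each item with MASK_TOKEN with prob `ratio`."""
    return [MASK_TOKEN if random.random() < ratio else item for item in seq]

def reorder(seq: List[int], ratio: float = 0.3) -> List[int]:
    """Shuffle a random contiguous window covering `ratio` of the sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    window = seq[start:start + n]
    random.shuffle(window)
    return seq[:start] + window + seq[start + n:]

demo = [1, 2, 3, 4, 5, 6]
print(crop(demo), mask(demo), reorder(demo))
```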
“…Most existing works attempt to learn user preference as a transition pattern from the sequential data, i.e., how a previous item transitions to the next item. The ordering of interacted items matters a great deal in this task (Jiang et al. 2021). Typical networks such as the Transformer are applied to model sequential data and produce recommendations (Kang and McAuley 2018; Sun et al. 2019; Li, Wang, and McAuley 2020; Ma et al. 2020; Jiang et al. 2021).…”
Section: Introduction
confidence: 99%
“…Recently, the Transformer has drawn considerable interest for modeling sequential data [9, 22, 24, 25, 46]. Self-attention with global feature interaction outperforms RNNs [32, 57] at capturing personalized patterns in many tasks [8, 50], since it allows feature interaction between every pair of items in the sequence, an even stronger global inductive bias than that of RNNs.…”
Section: Introduction
confidence: 99%
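The "global feature interaction" this statement describes can be seen in a minimal scaled dot-product self-attention sketch: every item attends to every other item in a single step, whereas an RNN propagates information strictly position by position. This is a bare-bones illustration with identity query/key/value projections (real Transformers use learned projections and multiple heads); shapes and names are assumptions, not tied to any cited model.

```python
# Minimal scaled dot-product self-attention over an item sequence.
# Identity Q/K/V projections are used for brevity; learned projections
# and multi-head attention are omitted from this sketch.
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """X: (seq_len, d) item embeddings; returns contextualized embeddings."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # (seq_len, seq_len) pairwise interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X  # each output position mixes ALL items at once

rng = np.random.default_rng(0)
items = rng.normal(size=(5, 8))  # 5 items, 8-dim embeddings
print(self_attention(items).shape)  # (5, 8)
```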