Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022
DOI: 10.1145/3477495.3531918
Dual Contrastive Network for Sequential Recommendation

Cited by 19 publications (12 citation statements) | References 15 publications
“…in Appendix A.4.1, where Micro-video is a collected industrial dataset and Amazon is the public benchmark dataset widely used in existing work on sequential recommendation [19]. Detailed descriptions of them are given below.…”
Section: Methods
Mentioning confidence: 99%
“…Hyper-parameters are generally set following the default settings of the baselines. We strictly follow existing work on sequential recommendation [19] and use Adam [15] with a learning rate of 0.0001 to weigh the gradients. The embedding sizes of all models are set to 32.…”
Section: 13
Mentioning confidence: 99%
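The quoted setup is a standard training configuration. Below is a minimal PyTorch sketch of it, assuming a bare item-embedding table as a stand-in model; the vocabulary size and the model itself are hypothetical, while the embedding size (32), the optimizer (Adam), and the learning rate (0.0001) come directly from the statement.

```python
import torch
import torch.nn as nn

NUM_ITEMS = 50_000                          # hypothetical vocabulary size
EMBED_DIM = 32                              # "embedding sizes ... are set to 32"

# Stand-in for any of the evaluated sequential models.
model = nn.Embedding(NUM_ITEMS, EMBED_DIM)

# Adam [15] with the quoted learning rate of 0.0001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```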
“…• Contrastive learning for the long-term representation is effective. Compared with the results in the previous SIGIR version [21], the improvement for SLi-Rec grows after introducing contrastive learning for the long-term representation. This means that even though long- and short-term modeling already provide SLi-Rec with sufficient prior knowledge, our new design can still lift it to a better stage. Since this improvement is achieved by performing contrastive learning on the long-term representation against static embeddings (i.e., target user or item embeddings), the observation also supports our view from the previous version: "long-term interest can be treated as static interest [21]".…”
Section: Study of Encoder Backbone
Mentioning confidence: 99%
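As an illustration of what contrastive learning between a long-term representation and a static embedding can look like, here is a hedged sketch using an InfoNCE objective with in-batch negatives. This is one common realization, not necessarily the cited paper's exact loss, and the tensor names are hypothetical.

```python
import torch
import torch.nn.functional as F

def long_term_contrastive_loss(long_term_repr, static_embed, temperature=0.1):
    """InfoNCE-style loss that pulls each user's long-term sequence
    representation toward the matching static embedding (e.g., the target
    user or item embedding), treating other in-batch pairs as negatives.

    long_term_repr: (B, D) output of the long-term sequence encoder
    static_embed:   (B, D) static target user/item embeddings
    """
    z1 = F.normalize(long_term_repr, dim=-1)
    z2 = F.normalize(static_embed, dim=-1)
    logits = z1 @ z2.t() / temperature                    # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # diagonal = positives
    return F.cross_entropy(logits, labels)
```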
“…Therefore, we instead enhance sequential learning by incorporating the user sequence for auxiliary contrastive training, rather than generating sparser augmented data with a dropout strategy. Besides, compared against the previous SIGIR version [21], we also gain some new insights into the overall performance from the item-centric perspective, as shown in Table 4.…”
Section: Overall Performance
Mentioning confidence: 99%
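The statement contrasts two ways of obtaining auxiliary views: dropout-based augmentation, which yields sparser data, versus the user sequence that already exists in the interaction logs. A minimal sketch of building both views from raw interaction triples, with hypothetical names throughout:

```python
from collections import defaultdict

def build_dual_sequences(interactions):
    """From (user, item, timestamp) triples, build the usual item sequence
    per user plus the user sequence per item; the latter serves as a natural
    auxiliary view for contrastive training instead of sparser
    dropout-generated augmentations."""
    item_seq = defaultdict(list)   # user -> items in chronological order
    user_seq = defaultdict(list)   # item -> users in chronological order
    for user, item, ts in sorted(interactions, key=lambda x: x[2]):
        item_seq[user].append(item)
        user_seq[item].append(user)
    return item_seq, user_seq

logs = [("u1", "i1", 1), ("u2", "i1", 2), ("u1", "i2", 3)]
item_seq, user_seq = build_dual_sequences(logs)
assert item_seq["u1"] == ["i1", "i2"] and user_seq["i1"] == ["u1", "u2"]
```

The design point in the quote is that such user sequences are real observed data, so the auxiliary view does not inherit the extra sparsity that dropout-perturbed copies of an already sparse item sequence would carry.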