Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval 2023
DOI: 10.1145/3539618.3591723

Graph Transformer for Recommendation

Abstract: This paper presents a novel approach to representation learning in recommender systems by integrating generative self-supervised learning with a graph transformer architecture. We highlight the importance of high-quality data augmentation with relevant self-supervised pretext tasks for improving performance. Towards this end, we propose a new approach that automates the self-supervision augmentation process through rationale-aware generative SSL that distills informative user-item interaction patterns. The prop…

Cited by 8 publications (2 citation statements)
References 38 publications
“…For example, SGL [8] employs lossy random node/edge operations to generate contrastive learning views, while LightGCL [12] generates contrastive views through singular value decomposition (SVD). In addition, some studies [10,11,28] have suggested that manually designed contrastive views may not adapt well to different data scenarios and downstream tasks. Therefore, some models [10,29,30] address this issue by proposing methods for automatically generating contrastive views.…”
Section: Self-supervised Contrastive Learning for Recommendation
confidence: 99%
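To make the two augmentation styles named in this citation concrete, here is a minimal sketch in Python of SGL-style random edge dropout and a LightGCL-style low-rank SVD reconstruction. The function names and the toy interaction matrix are illustrative assumptions, not code from either paper's release.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def edge_dropout_view(adj, drop_ratio=0.1, rng=None):
    """SGL-style lossy augmentation: drop a random subset of edges."""
    rng = rng or np.random.default_rng()
    keep = rng.random(adj.nnz) >= drop_ratio  # True for surviving edges
    return sp.coo_matrix(
        (adj.data[keep], (adj.row[keep], adj.col[keep])), shape=adj.shape
    )

def svd_view(adj, rank=2):
    """LightGCL-style augmentation: a truncated-SVD reconstruction that
    acts as a denoised, globally smoothed view of the graph."""
    u, s, vt = svds(adj.asfptype(), k=rank)
    return (u * s) @ vt  # dense low-rank approximation of `adj`

# Toy binary user-item interaction matrix: 4 users x 6 items.
interactions = sp.random(4, 6, density=0.5, format="coo", random_state=0)
interactions.data[:] = 1.0

local_view = edge_dropout_view(interactions, drop_ratio=0.2)
global_view = svd_view(interactions, rank=2)
```

The design difference the citing paper points at is visible here: the dropout view perturbs local connectivity at random, while the SVD view keeps only the dominant global interaction patterns.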
“…However, self-supervised learning relies on manually designed self-supervised tasks or signals to generate training data [10,11]. If an inappropriate self-supervised task is chosen, or the supervision signals are not rich enough, the model may fail to accurately extract global and local information, learning irrelevant features or local patterns while ignoring the global structure [12].…”
Section: Introduction
confidence: 99%
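The "signal" these citing papers refer to is typically a contrastive objective that ties two augmented views together. Below is a minimal numpy sketch of the InfoNCE loss commonly used for this; the toy embeddings and noise scale are illustrative assumptions, not values from any cited model.

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """InfoNCE over two views: row i of z1 and z2 embed the same node
    under different augmentations (positive pair); all other rows in
    the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau  # pairwise cosine similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))  # -log p(positive | row)

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))                    # toy node embeddings
view1 = base + 0.05 * rng.normal(size=base.shape)  # augmented view 1
view2 = base + 0.05 * rng.normal(size=base.shape)  # augmented view 2
print(f"InfoNCE loss: {info_nce(view1, view2):.4f}")
```

This makes the citing papers' critique concrete: the quality of the learned representations depends entirely on how the two views are constructed, which is exactly the step the surveyed paper seeks to automate.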