Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022
DOI: 10.1145/3477495.3531937
Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation

Cited by 208 publications (132 citation statements)
References 16 publications
“…SGL [34] models user-item interactions as a graph and builds multiple views by conducting node dropout, edge dropout, and random walk on the user-item graph. Similarly, Yu et al. [40] propose a simple CL method that perturbs the graph embedding space with uniform noise to build contrastive views for training the recommendation model. At the same time, some researchers model user-item interactions with a hypergraph and conduct CL over augmented hypergraph views for user/item representation learning [35,39,42].…”
Section: Contrastive Learning in Recommendation
confidence: 99%
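The noise-based view construction described in the excerpt above admits a compact sketch. Below is a minimal, illustrative PyTorch snippet of a SimGCL-style perturbation: each embedding is shifted by random noise of fixed L2 magnitude, sign-aligned with the embedding, and the two perturbed views are contrasted with InfoNCE. The function names (`perturb`, `info_nce`) and hyperparameter values are ours, not the authors'; this follows the cited description only loosely.

```python
import torch
import torch.nn.functional as F

def perturb(emb: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Add uniform random noise of fixed L2 magnitude eps, sign-aligned
    with the embedding (a sketch of SimGCL-style perturbation)."""
    noise = torch.rand_like(emb)              # U(0, 1) per coordinate
    noise = F.normalize(noise, dim=-1) * eps  # scale each noise vector to radius eps
    return emb + noise * torch.sign(emb)      # keep the view in the same orthant

def info_nce(view1: torch.Tensor, view2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """InfoNCE between two views; row i of view1 is the positive for row i of view2."""
    v1, v2 = F.normalize(view1, dim=-1), F.normalize(view2, dim=-1)
    logits = v1 @ v2.t() / tau                # cosine similarities / temperature
    labels = torch.arange(v1.size(0), device=v1.device)
    return F.cross_entropy(logits, labels)

emb = torch.randn(1024, 64)                   # e.g., a batch of user embeddings
cl_loss = info_nce(perturb(emb), perturb(emb))
```

Because the two views come from independent noise draws on the same embeddings, no structural augmentation of the graph (node/edge dropout) is needed, which is the point of contrast with SGL.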
“…The multimodal features are fed into different modality encoders. The modality encoders extract the representations and are general architectures used in other fields, such as ViT [13] for images. The accompanying survey table groups the cited models by fusion scheme and self-supervised method:

References | Fusion scheme | Self-supervised method
[34] | Coarse-grained Attention | CL
[40] | Coarse-grained Attention | None
[6], [21] | Fine-grained Attention | None
[30], [27], [57] | Combined Attention | None
[44], [39] | User-item Graph + Fine-grained Attention | None
[56] | User-item Graph | CL
[59] | Item-item Graph | CL
[58], [38] | Item-item Graph | None
[33] | Item-item Graph + Fine-grained Attention | None
[50], [45] | Knowledge Graph | None
[2], [46] | Knowledge Graph | CL
[8] | Knowledge Graph + Fine-grained Attention | None
[43] | Knowledge Graph + Filtration (graph) | None
[63], [55], [31] | Filtration (graph) | None
[49], [4] | MLP / Concat | DRL
[15], [28] | Fine-grained Attention | DRL
[61], [36], [48] | None | DRL…”
Section: Procedures of MRS
confidence: 99%
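To make the fusion taxonomy above concrete, the sketch below contrasts the two attention granularities it distinguishes: coarse-grained attention assigns one weight per modality, while fine-grained attention assigns a weight per feature dimension. The class names, shapes, and layers are our own illustration, not the API of any cited model.

```python
import torch
import torch.nn as nn

class CoarseGrainedFusion(nn.Module):
    """One scalar attention weight per modality."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim)
        w = torch.softmax(self.score(feats), dim=1)  # (batch, n_modalities, 1)
        return (w * feats).sum(dim=1)                # weighted sum over modalities

class FineGrainedFusion(nn.Module):
    """A separate attention weight for every feature dimension."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.score(feats), dim=1)  # (batch, n_modalities, dim)
        return (w * feats).sum(dim=1)

feats = torch.randn(32, 3, 64)                       # e.g., image/text/audio features
fused = CoarseGrainedFusion(64)(feats)               # -> (32, 64)
```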
“…The widely-used similarity functions include the inner product [14] and neural networks [11]. As suggested by recent work [19,33,36], the inner product supports highly efficient retrieval and usually exhibits stronger performance. Thus, for convenience, this work simply takes the representative inner product for analysis, i.e., the model prediction can be expressed as ŷ_ui = e_u^⊤ e_i.…”
Section: Preliminaries
confidence: 99%
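The prediction ŷ_ui = e_u^⊤ e_i reduces recommendation to a maximum-inner-product search over the item table, which is why the excerpt calls the inner product highly efficient for retrieval. A minimal sketch, with the embedding tables and sizes assumed for illustration:

```python
import torch

# Hypothetical learned embedding tables: n_users x d and n_items x d.
user_emb = torch.randn(10_000, 64)
item_emb = torch.randn(50_000, 64)

def recommend(u: int, k: int = 10) -> torch.Tensor:
    """Score all items as y_hat_ui = e_u^T e_i and return the top-k item ids."""
    scores = item_emb @ user_emb[u]  # one matrix-vector product over all items
    return scores.topk(k).indices
```

The single matrix-vector product also means approximate nearest-neighbor indexes can replace the exhaustive scan at serving time, which a learned neural similarity function does not directly support.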
“…Meanwhile, we compare the model with various types of SOTA models, ranging from SGL [33] and SimpleX [21] to SimGCL [36] and NCL [19]. We also report the distribution of the learned personalized τ for Adap-τ and Cu-τ, as marked by the orange and blue regions.…”
Section: 12
confidence: 99%
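The personalized τ that this excerpt visualizes enters the contrastive objective as a per-user temperature. Below is a hedged sketch of a sampled-softmax loss in which each user carries her own learnable log-temperature; this is our reading of an "Adap-τ"-style design, not the cited authors' code, and all names and sizes are assumptions.

```python
import torch
import torch.nn.functional as F

n_users, n_items, d = 1_000, 5_000, 64
user_emb = torch.randn(n_users, d, requires_grad=True)
item_emb = torch.randn(n_items, d, requires_grad=True)
log_tau = torch.zeros(n_users, requires_grad=True)   # per-user log-temperature

def loss(users: torch.Tensor, pos_items: torch.Tensor) -> torch.Tensor:
    """Softmax loss over all items with a personalized temperature per user."""
    tau = log_tau[users].exp().unsqueeze(1)           # (batch, 1); exp keeps tau > 0
    logits = (user_emb[users] @ item_emb.t()) / tau   # (batch, n_items)
    return F.cross_entropy(logits, pos_items)

users = torch.randint(0, n_users, (256,))
pos_items = torch.randint(0, n_items, (256,))
l = loss(users, pos_items)                            # backprop updates log_tau too
```

Parameterizing the temperature through its logarithm is one simple way to let gradient descent adapt τ per user while keeping it positive.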