SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation
Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512156

Cited by 146 publications (105 citation statements)
References 10 publications
“…Later, InfoGCL [28] diminishes the mutual information between contrastive parts among views while preserving the task-relevant representation. Beyond augmenting graphs, SimGRACE [15] disturbs the model weights and then learns the invariant high-level representation at the output end to alleviate the design of graph augmentation.…”
Section: Related Work
confidence: 99%
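The weight-perturbation idea attributed to SimGRACE in the statement above can be illustrated with a short sketch: instead of augmenting the input graph, a perturbed copy of the encoder supplies the second view. This is a simplified, hypothetical illustration, not the authors' exact implementation; the encoder object, the noise scale `eta`, and the uniform Gaussian noise on every parameter are assumptions.

```python
import copy
import torch

def two_views_by_weight_perturbation(encoder, graph_batch, eta=0.1):
    """Produce two views of the same graphs without data augmentation:
    one from the original encoder, one from a weight-perturbed copy."""
    z1 = encoder(graph_batch)                  # view from the original weights

    perturbed = copy.deepcopy(encoder)         # independent copy of the GNN encoder
    with torch.no_grad():
        for p in perturbed.parameters():
            p.add_(eta * torch.randn_like(p))  # Gaussian noise on every weight tensor
    z2 = perturbed(graph_batch)                # view from the perturbed weights
    return z1, z2
```

The two embeddings `z1` and `z2` would then be fed to a contrastive objective, so that the representation becomes invariant to small changes of the encoder rather than to hand-designed graph augmentations.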
“…Discussion. From the perspective of information usage for model training, our proposed method is the same as the semi-supervised learning task by recent graph contrastive learning methods [11,12,14,15], which use structure information of all graphs and label information of a subset of all graphs for model training. From the perspective of training strategy, the previous methods first pre-train a model via a contrastive loss and then fine-tune the model for downstream tasks.…”
Section: Label-invariant Augmentation
confidence: 99%
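The two-stage strategy this statement attributes to prior methods (contrastive pre-training on the structure of all graphs, then supervised fine-tuning on the labeled subset) can be outlined roughly as below. All names here are placeholders for whatever the cited methods actually use; this is a generic sketch of the workflow, not any specific paper's code.

```python
import torch
import torch.nn.functional as F

def pretrain_then_finetune(encoder, make_views, contrastive_loss,
                           unlabeled_loader, labeled_loader,
                           hidden_dim, num_classes, lr=1e-3):
    """Stage 1 uses structure information of all graphs;
    stage 2 uses label information of a labeled subset only."""
    # Stage 1: self-supervised pre-training with a contrastive loss.
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for batch in unlabeled_loader:
        z1, z2 = make_views(encoder, batch)        # two views (augmented or weight-perturbed)
        loss = contrastive_loss(z1, z2)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: supervised fine-tuning on the labeled subset.
    head = torch.nn.Linear(hidden_dim, num_classes)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    for batch, labels in labeled_loader:
        logits = head(encoder(batch))
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder, head
```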
“…Recent advancements in representation learning have been driven by the SSL paradigm, where the goal is to ensure representations have high similarity between positive views of a sample and high dissimilarity between negative views. Existing SSL frameworks can be broadly categorized based on the mechanism adopted for enforcing this consistency: contrastive learning (CL) frameworks [1,8,7,22,29,31,32], such as GraphCL [22], use the InfoNCE loss; approaches that rely only on positive pairs, such as SimSiam [2] and BGRL [24] use Siamese architectures with stop gradient [2] and asymmetric branches [21] respectively; SpecCL [15] uses a spectral clustering loss (SpecLoss) to enforce consistency; others attempt to directly reduce redundancy between views [3,33]. Despite these differences, all methods rely upon data augmentation to generate positive views, which are assumed to share semantics.…”
Section: Introduction
confidence: 99%
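For reference, the InfoNCE (NT-Xent-style) objective that this statement attributes to contrastive frameworks such as GraphCL can be written compactly as below. This is a simplified one-directional variant with an assumed temperature `tau`, not the exact loss of any single cited paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Each graph's two view embeddings (matching rows of z1 and z2) form a
    positive pair; every other graph in the batch serves as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                               # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(sim, targets)
```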