Proceedings of the 2022 SIAM International Conference on Data Mining (SDM) 2022
DOI: 10.1137/1.9781611977172.10
Structure-Enhanced Heterogeneous Graph Contrastive Learning

Cited by 26 publications (20 citation statements)
References 0 publications
“…We evaluate the performance of our model against various baselines, ranging from shallow graph representation learning algorithms, including DeepWalk 37 , Metapath2vec 38 , HIN2vec 39 , HERec 40 , to GCL methods (e.g., DGI 41 , GRACE 16 , DMGI 20 , STENCIL 22 , HeCo 21 ), to supervised GNNs such as GCN 42 , GAT 10 , HAN 36 . Note that DMGI, STENCIL, and HeCo are three GCL models dedicated to HINs.…”
Section: Methods
confidence: 99%
“…However, these methods do not consider the sampling bias inherent in GCL, which inevitably leads to sub-optimal results. STENCIL 22 and HeCo 21 treat the aggregation of metapath-induced subgraphs as an additional view, and use metapath-based similarity to measure the hardness between nodes when synthesizing hard negatives. However, they still fail to model the consistency and complementarity between metapath views.…”
Section: Related Work
confidence: 99%
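The hardness measure described in the excerpt above can be sketched as follows. This is a simplified illustration, not STENCIL's actual implementation: the function names, the precomputed `meta_sim` matrix (e.g., normalized metapath-instance counts), and the two-candidate mixing step are all assumptions made for this sketch.

```python
import numpy as np

def metapath_hardness_weights(meta_sim, anchor, candidates):
    """Weight candidate negatives by metapath-based similarity to the anchor:
    candidates sharing more metapath context are treated as 'harder'.
    meta_sim is an assumed precomputed node-node similarity matrix."""
    w = meta_sim[anchor, candidates]
    return w / (w.sum() + 1e-12)  # sampling distribution over candidates

def synthesize_hard_negative(emb, candidates, weights, rng):
    """Mix two candidate embeddings, sampled according to hardness weights,
    into one synthetic hard negative (a simplified reading of the
    hard-negative synthesis described above)."""
    idx = rng.choice(candidates, size=2, p=weights, replace=False)
    alpha = rng.uniform(0.0, 1.0)
    return alpha * emb[idx[0]] + (1 - alpha) * emb[idx[1]]
```

The key design point the excerpt criticizes is visible here: hardness is computed per metapath view in isolation, so nothing in the weighting models agreement or complementarity across views.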
“…where d⁻ᵢ ∈ D⁻ᵢ are the in-batch negatives, and 𝜏 = 0.01 is the temperature parameter. Contrastive pretraining improves both the alignment and uniformity of sequence embeddings [14,21,27,28,39], which better supports the retrieval task. □ Document Retrieval using Label Names.…”
Section: Stage-I: Dense Retrieval with Label Names
confidence: 99%
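The loss in this excerpt is the standard InfoNCE objective with in-batch negatives: the i-th document is the positive for the i-th query, and every other document in the batch acts as a negative. A minimal NumPy sketch, assuming L2-normalized embeddings and dot-product similarity (the cited paper's exact encoder and similarity function are not shown here):

```python
import numpy as np

def in_batch_infonce(queries, docs, tau=0.01):
    """InfoNCE over a batch of (query, doc) pairs. Row i of docs is the
    positive for row i of queries; off-diagonal docs are in-batch negatives.
    tau = 0.01 follows the temperature in the excerpt above."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    logits = (q @ d.T) / tau                         # [B, B] similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # cross-entropy on diagonal
```

A small temperature such as 0.01 sharpens the softmax, so the loss is dominated by the hardest (most similar) in-batch negatives.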
“…Graph-based Negative Sampling Methods. Existing graph-based methods for negative sampling mostly concentrate on collaborative filtering (CF)-based recommendation [6,24] and graph contrastive learning [21,23,28]. MixGCF [6] efficiently injects information from positive samples into negative samples via a mix-up mechanism.…”
Section: Related Work
confidence: 99%
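The mix-up mechanism attributed to MixGCF can be illustrated roughly as follows. The function name, the uniform mixing coefficients, and the inner-product hardness score are simplifications made for this sketch, not MixGCF's exact procedure:

```python
import numpy as np

def mixup_negative(pos_emb, neg_embs, rng):
    """Interpolate each candidate negative toward the positive embedding,
    then return the hardest mixed candidate (highest inner product with
    the positive). This 'injects information of positive samples into
    negative samples', per the excerpt above."""
    alphas = rng.uniform(0.0, 1.0, size=(neg_embs.shape[0], 1))
    mixed = alphas * pos_emb + (1 - alphas) * neg_embs  # positive signal mixed in
    scores = mixed @ pos_emb                            # hardness = similarity
    return mixed[np.argmax(scores)]
```

Mixing the positive into each negative guarantees the synthesized negative lies closer to the decision boundary than a randomly sampled one, which is the point of the mechanism.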