2022
DOI: 10.1016/j.knosys.2021.107611

Megnn: Meta-path extracted graph neural network for heterogeneous graph representation learning

Cited by 55 publications (17 citation statements)
References 18 publications
“…Using the message-passing paradigm of GNNs through trainable convolutions, Megnn can optimize and extract effective meta-paths for heterogeneous graph representation learning. Extensive experimental results on three datasets not only demonstrate the effectiveness of Megnn compared with state-of-the-art methods but also show that the extracted meta-paths are interpretable [9]. Lei et al. used source-task models to reduce the training cost of target-task models.…”
Section: Related Work
Mentioning confidence: 97%
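The statement above summarizes Megnn's core idea: trainable convolutions that compose heterogeneous relations into effective meta-paths. The PyTorch sketch below is a minimal illustration of that idea, not the authors' implementation; all names (SoftMetaPathLayer, MetaPathGNN, adjs) are illustrative assumptions. Each layer learns a soft selection over relation-specific adjacency matrices, and chaining the selected matrices across hops yields a soft meta-path along which node features are propagated.

    import torch
    import torch.nn as nn

    # Hedged sketch, not Megnn's actual code: each layer softly selects one
    # relation; multiplying the selected adjacencies across hops composes
    # relations into a soft meta-path.
    class SoftMetaPathLayer(nn.Module):
        def __init__(self, num_relations):
            super().__init__()
            # One trainable score per relation type; softmax keeps the
            # selection differentiable.
            self.scores = nn.Parameter(torch.zeros(num_relations))

        def forward(self, adjs):
            # adjs: list of dense (N, N) adjacency matrices, one per relation.
            weights = torch.softmax(self.scores, dim=0)
            return sum(w * a for w, a in zip(weights, adjs))

    class MetaPathGNN(nn.Module):
        def __init__(self, num_relations, in_dim, out_dim, hops=2):
            super().__init__()
            self.layers = nn.ModuleList(
                SoftMetaPathLayer(num_relations) for _ in range(hops))
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, adjs, x):
            # Chaining the selected adjacencies builds a length-`hops` soft
            # meta-path; node features then propagate along it.
            path = self.layers[0](adjs)
            for layer in self.layers[1:]:
                path = path @ layer(adjs)
            return torch.relu(path @ self.lin(x))

    # Toy usage: 5 nodes, 3 relation types, a 2-hop soft meta-path.
    adjs = [torch.rand(5, 5).round() for _ in range(3)]
    model = MetaPathGNN(num_relations=3, in_dim=8, out_dim=4)
    out = model(adjs, torch.randn(5, 8))  # (5, 4) node embeddings

After training, the per-layer softmax weights indicate which relation each hop favors, which is one way an extracted meta-path can be read off and interpreted, consistent with the interpretability claim quoted above.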
“…Large-scale heterogeneous graph neural networks. Many heterogeneous GNN architectures have been proposed in recent years [16,39,106,126,137]. However, few distributed systems take the unique characteristics of heterogeneous graphs into consideration to support heterogeneous GNNs.…”
Section: Future Direction
Mentioning confidence: 99%
“…Since GCA [56] is designed for node classification, we extend it to the subgraph classification task by adding an average-pooling layer. For SubGNN [1], we use the optimal model hyperparameters suggested in the official source code 5.…”
Section: Experimental Settings
Mentioning confidence: 99%
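The extension described above, turning a node-classification model into a subgraph classifier by adding an average-pooling layer, is a small architectural change. The sketch below shows one minimal way to do it in PyTorch; it is an assumption-laden illustration, not GCA's code, and SubgraphClassifier, node_encoder, and subgraph_idx are hypothetical names.

    import torch
    import torch.nn as nn

    # Hedged sketch, not GCA's actual code: wrap any node-level encoder,
    # mean-pool the embeddings of the subgraph's nodes, and classify the
    # pooled vector.
    class SubgraphClassifier(nn.Module):
        def __init__(self, node_encoder, emb_dim, num_classes):
            super().__init__()
            self.encoder = node_encoder           # any module: features -> embeddings
            self.head = nn.Linear(emb_dim, num_classes)

        def forward(self, x, subgraph_idx):
            h = self.encoder(x)                   # (N, emb_dim) node embeddings
            pooled = h[subgraph_idx].mean(dim=0)  # average pooling over the subgraph
            return self.head(pooled)              # class logits for the subgraph

    # Toy usage with a linear stand-in for the pretrained node encoder.
    clf = SubgraphClassifier(nn.Linear(16, 32), emb_dim=32, num_classes=3)
    logits = clf(torch.randn(100, 16), torch.tensor([3, 7, 42]))

Mean pooling keeps the classifier invariant to node ordering and subgraph size, which is why it is a common default choice for this kind of extension.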
“…Powerful GRL models are usually built on large amounts of training data with supervised signals (labels), but real-world applications often lack labeled data. Recent studies have noticed this challenge in GRL and proposed approaches for data-efficient learning on graphs (GEL) [5,18,41,52,53]. However, existing GEL methods operate at the node level [54], edge level [17], or graph level [32].…”
Section: Introduction
Mentioning confidence: 99%