2024
DOI: 10.1109/tnnls.2022.3232709

HGBER: Heterogeneous Graph Neural Network With Bidirectional Encoding Representation

Abstract: Heterogeneous graph neural networks (HGNNs), an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs). However, most HGNNs follow a semi-supervised learning paradigm, which notably limits their use in practice, since labels are usually scarce in real applications. Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available. In this paper, we study the problem…

Cited by 4 publications (5 citation statements)
References 50 publications

“…Expressive power for node classification and link prediction. Due to the dominant WL algorithm for graph… (✓) not more powerful than k-WL; Maron et al [67] (✓) as powerful as k-WL; Chen et al [68] (✓) more powerful than 1-WL; Morris et al [71] (✓) not more powerful than k-WL; Morris et al [72] (✓) not more powerful than k-WL; Zhao et al [73] (✓) not more powerful than k-WL.
Substructure-based GNNs: Barcelo et al [78] (✓) less powerful than k-WL; Bodnar et al [82] (✓) more powerful than 1-WL; Bodnar et al [81] (✓) more powerful than 1-WL; Thiede et al [105] (✓) not more powerful than 1-WL; Horn et al [79] (✓) more powerful than 1-WL; Toenshoff et al [80] (✓✓) incomparable to 1-WL; Bouritsas et al [77] (✓) more powerful than 1-WL; Choi et al [128] (✓) incomparable to WL hierarchy.
Distance-based GNNs: Li et al [87] (✓✓) more powerful than 1-WL; Zhang et al [25] (✓✓) as powerful as 3-WL; Nikolentzos et al [89] (✓) not more powerful than 3-WL; Feng et al [90] (✓) more powerful than 1-WL; Wang et al [74] (✓) incomparable to k-WL.
Subgraph GNNs…”
Section: Discussion
“…Zhang and Chen [86] first leverage the shortest-path distances between target nodes and other nodes to enhance the link prediction performance of GNNs in their SEAL algorithm. Li et al [87] generalize this into distance encoding (DE), defined via random walks, to learn structural representations and overcome the limitations of the 1-WL test. Specifically, they use the shortest-path distance and generalized PageRank scores [88] as measures of DE, which then serve as extra features or as controllers of message aggregation, yielding the powerful GNN architectures DE-GNN and DEA-GNN.…”
Section: Graph Property Based GNNs
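A minimal sketch of the distance-encoding idea, in the spirit of Li et al [87]: one-hot shortest-path distances to a set of target nodes are attached as extra node features. The example graph, target nodes, and the max_dist cap are illustrative assumptions, and DE-GNN's other measures (generalized PageRank scores, landing probabilities) are omitted here.

import networkx as nx
import numpy as np

def distance_encoding(G, target_nodes, max_dist=10):
    """One-hot encode each node's shortest-path distance to every target node."""
    n = G.number_of_nodes()
    blocks = []
    for t in target_nodes:
        # Shortest-path distance from the target to every reachable node.
        spd = nx.single_source_shortest_path_length(G, t)
        col = np.full(n, max_dist)  # unreachable nodes keep the cap value
        for v, d in spd.items():
            col[v] = min(d, max_dist)
        blocks.append(np.eye(max_dist + 1)[col])  # one-hot over {0..max_dist}
    # Concatenate one block of distance features per target node.
    return np.concatenate(blocks, axis=1)

G = nx.karate_club_graph()
X_de = distance_encoding(G, target_nodes=(0, 33))
print(X_de.shape)  # (34, 22): one 11-dim one-hot block per target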
“…Inspired by several solutions in graph machine learning (Dwivedi et al, 2020;Li et al, 2020b), we introduce two types of node features, as the initializations of node embeddings h 0 i , to improve the expressiveness of our GNN model and facilitate the topological feature learning. We use two complementary network features, namely the graph Laplacian positional embeddings, which encode a node's position with respect to other nodes in the network, and the diffusion-based embeddings, which capture a node's distance to other nodes in random walks.…”
Section: Network Features Of Proteinsmentioning
confidence: 99%
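As a concrete illustration of the first feature type, the sketch below computes Laplacian positional embeddings from the smallest non-trivial eigenvectors of the normalized graph Laplacian, roughly in the manner of Dwivedi et al. (2020). The example graph and the embedding size k are assumptions, and the usual eigenvector sign ambiguity is ignored.

import networkx as nx
import numpy as np

def laplacian_pe(G, k=8):
    """Return a k-dimensional positional embedding per node, taken from the
    k smallest non-trivial eigenvectors of the normalized Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    _, eigvecs = np.linalg.eigh(L)  # columns sorted by ascending eigenvalue
    # Skip the first (trivial, eigenvalue ~0) eigenvector.
    return eigvecs[:, 1:k + 1]

G = nx.karate_club_graph()
pe = laplacian_pe(G)
print(pe.shape)  # (34, 8)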
“…Distance embeddings: random walk or PageRank-based algorithms have been widely used to learn network embeddings (Perozzi et al., 2014; Grover and Leskovec, 2016) and to improve the expressiveness of GNNs (Li et al., 2020b). For example, the distance matrix at the equilibrium state of a random walk with restart has been used to encode the topological roles of genes or proteins in molecular networks (Cowen et al., 2017; Cho et al., 2016).…”
Section: Network Features Of Proteins
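A minimal sketch of that equilibrium matrix: random walk with restart (RWR) has the closed-form stationary solution R = alpha * (I - (1 - alpha) * P)^-1, where P is the column-stochastic transition matrix. The restart probability alpha and the example graph are illustrative assumptions; practical pipelines typically use sparse or iterative solvers rather than a dense inverse.

import networkx as nx
import numpy as np

def rwr_matrix(G, alpha=0.15):
    """Column j holds the stationary visiting probabilities of a random
    walk that restarts at node j with probability alpha at each step."""
    A = nx.to_numpy_array(G)
    P = A / A.sum(axis=0, keepdims=True)  # column-stochastic transitions
    n = G.number_of_nodes()
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * P)

G = nx.karate_club_graph()
R = rwr_matrix(G)
print(R.shape, R[:, 0].sum())  # (34, 34); each column sums to ~1.0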
“…Zhang and Chen [11] proposed SEAL to further exploit structural information; it uses graph neural networks instead of fully connected neural networks, giving a better ability to learn graph features. SEAL first labels nodes based on their distances to the source and target nodes in each enclosing subgraph, using double-radius node labeling (DRNL) together with Node2vec node embedding features, and then applies a deep graph convolutional neural network (DGCNN) [24] to learn link representations for link prediction. Li et al [12] introduced another node labeling technique, similar to SEAL, proposing distance encoding (DE) as an additional node feature, which directly uses the shortest-path distance (SPD) to the target nodes. You et al [25] proposed the position-aware GNN (P-GNN) for computing position-aware node embeddings.…”
Section: Related Work
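To make the labeling step concrete, here is a minimal sketch of double-radius node labeling using the hashing formula from the SEAL paper. It is simplified in one assumed respect: distances are computed on the intact subgraph, whereas SEAL computes each node's distance to one target with the other target temporarily removed.

import networkx as nx

def drnl_labels(G, x, y):
    """Label each node by a perfect hash of its distance pair to (x, y)."""
    dx = nx.single_source_shortest_path_length(G, x)
    dy = nx.single_source_shortest_path_length(G, y)
    labels = {}
    for v in G.nodes:
        if v == x or v == y:
            labels[v] = 1          # the two target nodes of the link
        elif v not in dx or v not in dy:
            labels[v] = 0          # node unreachable from a target
        else:
            d = dx[v] + dy[v]
            # DRNL hash: 1 + min(dx, dy) + (d/2) * ((d/2) + (d % 2) - 1)
            labels[v] = 1 + min(dx[v], dy[v]) + (d // 2) * (d // 2 + d % 2 - 1)
    return labels

G = nx.karate_club_graph()
print(drnl_labels(G, 0, 33)[2])  # label of node 2 w.r.t. target link (0, 33)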