2020
DOI: 10.1609/aaai.v34i04.5929

Independence Promoted Graph Disentangled Networks

Abstract: We address the problem of disentangled representation learning with independent latent factors in graph convolutional networks (GCNs). Current methods usually learn a node's representation by describing its neighborhood as a perceptual whole in a holistic manner, ignoring the entanglement of the latent factors. However, a real-world graph is formed by the complex interaction of many latent factors (e.g., the same hobby, education, or work in a social network). While little effort has been made toward explori…
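To make the idea concrete, below is a minimal sketch (not the paper's implementation) of the factor-wise neighborhood routing that disentangled GCNs of this family build on: node features are split into K factor channels, and each neighbor is softly assigned to the factor it agrees with most. All names, shapes, and the number of routing iterations are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def disentangled_routing(x, edge_index, num_factors, num_iters=3):
    """Split node features into `num_factors` channels and softly route
    each neighbor to the factor channel it agrees with most.

    x:          [N, D] node features, D divisible by num_factors
    edge_index: [2, E] directed edges (src -> dst)
    Returns:    [N, D] disentangled node representations
    """
    N, D = x.shape
    d = D // num_factors
    # Factor-wise channels, L2-normalized so dot products act as cosine scores.
    z = F.normalize(x.view(N, num_factors, d), dim=-1)   # [N, K, d]
    c = z.clone()                                         # per-node factor centers
    src, dst = edge_index
    for _ in range(num_iters):
        # Agreement between each neighbor's channel and the target's center.
        score = (z[src] * c[dst]).sum(-1)                 # [E, K]
        p = torch.softmax(score, dim=-1).unsqueeze(-1)    # routing weights [E, K, 1]
        # Weighted aggregation of neighbors into each factor channel.
        agg = torch.zeros_like(c)
        agg.index_add_(0, dst, p * z[src])
        c = F.normalize(z + agg, dim=-1)
    return c.reshape(N, D)

# Tiny example: 5 nodes with 16-dim features split into 4 factor channels.
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
out = disentangled_routing(x, edge_index, num_factors=4)  # [5, 16]
```

Each 4-dimensional channel of the output is intended to capture one latent factor behind the node's connections, which is the disentanglement the abstract refers to.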

Cited by 59 publications (29 citation statements): 1 supporting, 28 mentioning, 0 contrasting.
References 10 publications.
“…To evaluate the performance of ADGCN, we choose to compare with several baselines, especially those that have recently achieved state-of-the-art results. In addition to GCN [12] and GAT [33], which have demonstrated excellent performance on graph-based tasks, we also compare with the most related state-of-the-art works in disentangled graph learning, such as DisenGCN [20] and IPGDN [17].…”
Section: Comparison Methods (mentioning; confidence: 99%)
“…Specifically, the number K of the intermediate layers is set to gradually decrease to learn the hierarchical disentangled representation, and we use skip-connections to preserve the representations of different levels. As for the output dimension of the first layer, it is set to K(1) · ∆d = 128, where K(1) ∈ …. As hyper-parameters, both K(1) and ∆K are searched through hyperopt [2].…”

Results table embedded in the quote (classification accuracy, %; Cora / Citeseer / Pubmed):

Method           Cora   Citeseer   Pubmed
[36]             59.0   59.6       71.1
LP [39]          68.0   45.3       63.0
DeepWalk [26]    67.2   43.2       65.3
ICA [6]          75.1   69.1       73.9
Planetoid [37]   75.7   64.7       77.2
ChebNet [5]      81.2   69.8       74.4
GCN [12]         81.5   70.3       79.0
MoNet [23]       81.7   -          78.8
GAT [33]         83.0   72.5       79.0
DisenGCN [20]    83.7   73.4       80.5
IPGDN [17]       84.…  (row truncated in the source)

Section: Implementation Details (mentioning; confidence: 99%)
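The quoted passage says K(1) and ∆K are searched through hyperopt [2]. As a rough illustration of what such a search can look like, here is a minimal hyperopt sketch; the search space and the `train_and_eval` function are hypothetical placeholders, not the cited paper's actual configuration.

```python
from hyperopt import fmin, tpe, hp, space_eval, STATUS_OK

# Illustrative search space (assumed values): K1 is the number of factor
# channels in the first layer, delta_K is how much K shrinks per layer.
space = {
    "K1":      hp.choice("K1", [4, 8, 16, 32]),
    "delta_K": hp.choice("delta_K", [1, 2, 4]),
}

def objective(params):
    # train_and_eval is a hypothetical stand-in for the full training loop;
    # it should return validation accuracy for the given configuration.
    val_acc = train_and_eval(K1=params["K1"], delta_K=params["delta_K"])
    return {"loss": -val_acc, "status": STATUS_OK}  # hyperopt minimizes loss

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(space_eval(space, best))  # map chosen indices back to actual values
```

Note that for `hp.choice` spaces, `fmin` returns the index of the selected option rather than the value itself, which is why `space_eval` is used to recover the chosen configuration.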
“…Since different types of connections are generally treated as plain edges, GCNs represent each type of connection individually and aggregate them, which leads to redundant representations. Independence Promoted Graph Disentangled Network (IPGDN) [76] distinguishes the neighborhood into different parts and automatically discovers the nuances of the graph's independent latent features, thereby reducing the difficulty of detecting communities. IPGDN is supported by Hilbert-Schmidt Independence Criterion (HSIC) regularization [77] in neighborhood routing.…”
Section: B. GCN-based Community Detection (mentioning; confidence: 99%)
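Since the quote attributes IPGDN's independence constraint to HSIC regularization [77], a compact sketch of a biased empirical HSIC estimator may help. Linear kernels, the shapes, and the pairwise-penalty usage below are assumptions made for illustration; the paper may use a different kernel or estimator.

```python
import torch

def hsic(x, y):
    """Biased empirical HSIC between two feature matrices (linear kernels).

    x: [n, d1], y: [n, d2]. Returns a non-negative scalar that is close to
    zero when the two feature sets are (linearly) independent, so it can be
    minimized as a regularizer that pushes factor channels apart.
    """
    n = x.size(0)
    K = x @ x.t()                                  # [n, n] kernel matrix of x
    L = y @ y.t()                                  # [n, n] kernel matrix of y
    H = torch.eye(n, device=x.device) - 1.0 / n    # centering matrix I - (1/n)11^T
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

# Illustrative use: penalize dependence between every pair of factor channels
# of a disentangled representation z with (assumed) shape [n, K, d].
z = torch.randn(32, 4, 8)
penalty = sum(hsic(z[:, i], z[:, j])
              for i in range(4) for j in range(i + 1, 4))
```

Adding such a penalty to the training loss encourages the different factor channels produced by neighborhood routing to carry independent information, which is the role the survey assigns to HSIC in IPGDN.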
“…Most recent works are based on the autoencoder architecture, where the latent features generated by the encoder are constrained to be independent in each dimension. The works of DisenGCN (Ma et al. 2019) and IPGDN (Liu et al. 2020), as pioneering attempts, achieve node-level disentanglement through neighbor routing that divides the neighbors of a node into several mutually exclusive parts. FactorGCN (Yang et al. 2020), on the other hand, performs relation disentanglement by taking global topological semantics into account.…”
Section: Related Work (mentioning; confidence: 99%)