Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
DOI: 10.1145/3394486.3403221

Interpretable Deep Graph Generation with Node-edge Co-disentanglement

Abstract: Disentangled representation learning has recently attracted a significant amount of attention, particularly in the field of image representation learning. However, learning the disentangled representations behind a graph remains largely unexplored, especially for attributed graphs with both node and edge features. Disentanglement learning for graph generation poses substantial new challenges, including: 1) the lack of graph deconvolution operations to jointly decode node and edge attributes; and 2) the difficul…

Cited by 27 publications (24 citation statements)
References 20 publications
“…Examples of such interpretable factors include color of an object in an image, or the presence of a smile on a face [66]. This can be quantified by measuring the mutual information between a dimension and a concept [89]. Ideally, a dimension has high mutual information with a single concept and zero mutual information with all other concepts.…”
Section: Functionally Evaluating Covariate Complexity
confidence: 99%
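The mutual-information criterion in the quote above can be sketched numerically. The snippet below is a minimal illustration, not code from the cited papers: it estimates the mutual information between one latent dimension and a discrete concept via a joint histogram, and checks that a dimension encoding the concept scores high while an independent one scores near zero. All names and the toy data are hypothetical.

```python
import numpy as np

def mutual_information(dim_values, concept_labels, n_bins=20):
    """Estimate I(z_j; c) by discretizing the latent dimension and
    forming a joint histogram with the discrete concept labels."""
    edges = np.histogram_bin_edges(dim_values, n_bins)
    z_binned = np.digitize(dim_values, edges)            # bin index per sample
    joint = np.zeros((n_bins + 2, int(concept_labels.max()) + 1))
    for z, c in zip(z_binned, concept_labels):
        joint[z, c] += 1
    joint /= joint.sum()                                 # joint distribution p(z, c)
    pz = joint.sum(axis=1, keepdims=True)                # marginal p(z)
    pc = joint.sum(axis=0, keepdims=True)                # marginal p(c)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pz @ pc)[nz])).sum())

# Toy check: a dimension that encodes the concept has high MI,
# an independent dimension has MI near zero.
rng = np.random.default_rng(0)
c = rng.integers(0, 2, 5000)                             # binary concept
z_informative = c + 0.1 * rng.normal(size=5000)
z_noise = rng.normal(size=5000)
print(mutual_information(z_informative, c))              # close to ln 2, the concept's entropy
print(mutual_information(z_noise, c))                    # near 0
```

The histogram estimator is crude (it has a small positive bias for independent variables), but it captures the criterion in the quote: high MI with one concept, near-zero MI with the rest.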
“…Others vary exactly one dimension and keep all others fixed. Then, the output variance per concept can be measured [66], or the accuracy of a classifier that should predict the index of the specific factor of variation (ceteris paribus) [89]. Another approach, originally suggested by [129], is to generate data while keeping exactly one covariate fixed and varying the others, and evaluate whether the variance in one dimension is exactly zero [25,156].…”
Section: Functionally Evaluating Covariate Complexity
confidence: 99%
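The ceteris-paribus probe described above (hold one factor fixed, vary the rest, and look for the latent dimension whose variance collapses) can be sketched as follows. The `encode` function here is a hypothetical stand-in for a trained, perfectly disentangled encoder; it is an assumption for illustration, not part of any cited method.

```python
import numpy as np

rng = np.random.default_rng(1)
PERM = np.array([2, 0, 1])  # hypothetical: latent dim PERM[k] encodes factor k

def encode(factors):
    """Stand-in for a trained encoder: each latent dimension copies one
    ground-truth factor (plus small noise), i.e. an ideally disentangled model."""
    z = np.empty_like(factors)
    z[:, PERM] = factors
    return z + 0.01 * rng.normal(size=z.shape)

def least_varying_dim(fixed_factor, n=1000, n_factors=3):
    """Fix one factor, vary the others, and report the latent dimension
    with the smallest variance -- the ceteris-paribus probe from the quote."""
    factors = rng.normal(size=(n, n_factors))
    factors[:, fixed_factor] = 0.0          # hold this factor constant
    z = encode(factors)
    return int(np.argmin(z.var(axis=0)))

for k in range(3):
    print(k, least_varying_dim(k))          # recovers PERM[k] for each factor
```

With a real encoder, the per-dimension variances (or a classifier predicting which factor was fixed) replace this argmin check, but the logic is the same: a disentangled dimension is the only one whose variance vanishes when its factor is held constant.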
“…The goal of disentangled representation learning is to separate the underlying semantic factors accounting for the variation of the data in the learned representation. This disentangled representation has been shown to be resilient to the complex factors involved (Bengio et al., 2013), and can enhance generalizability and improve robustness against adversarial attacks (Guo et al., 2020a; Alemi et al., 2016). Intuitively, disentangled representation learning can achieve superior interpretability over regular graph representation learning tasks to better understand the graphs in various domains (Guo et al., 2020b).…”
Section: Disentangled Representation Learning
confidence: 99%
“…Intuitively, disentangled representation learning can achieve superior interpretability over regular graph representation learning tasks to better understand the graphs in various domains (Guo et al., 2020b). This motivates the surge of a few VAE-based approaches that modify the VAE objective by adding, removing, or adjusting the weight of individual terms for deep graph generation tasks (Guo et al., 2020a; Chen et al., 2018; Kim & Mnih, 2018). Disentangled representation learning is important in modeling periodic graphs, where the process requires distinguishing the repeatable patterns from the others.…”
Section: Disentangled Representation Learning
confidence: 99%
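The term re-weighting the quote refers to can be illustrated with its simplest instance: a β-weighted ELBO in the style of β-VAE, where the KL term's weight is adjusted to trade reconstruction against disentanglement. The shapes and names below are illustrative assumptions, not the objective of any specific cited paper.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Negative ELBO with the KL term reweighted by beta -- the simplest of
    the term-weighting schemes the quote refers to (beta=1 is a plain VAE)."""
    recon = np.sum((x - x_recon) ** 2)                         # Gaussian reconstruction error
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)   # KL(q(z|x) || N(0, I))
    return recon + beta * kl

# With mu=0 and logvar=0 the posterior matches the prior, the KL term
# vanishes, and only the reconstruction error remains.
x = np.ones((2, 3)); mu = np.zeros((2, 4)); logvar = np.zeros((2, 4))
print(beta_vae_loss(x, x, mu, logvar))  # 0.0
```

Raising `beta` above 1 penalizes deviation from the factorized prior more heavily, which is the mechanism these approaches use to encourage disentangled latent dimensions.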
“…Disentangled representations have been successfully found for image data (Burgess et al., 2019; van Steenkiste et al., 2019; Leeb et al., 2020; Besserve et al., 2019). However, disentanglement of graph data is largely unexplored, with a few exceptions (Ma et al., 2019; Guo et al., 2020; Stoehr et al., 2019).…”
Section: Introduction
confidence: 99%