Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018
DOI: 10.1145/3219819.3220000

Learning Deep Network Representations with Adversarially Regularized Autoencoders

Cited by 162 publications (100 citation statements) · References 21 publications
“…This indicates the effectiveness of adversarial learning, i.e., dynamically playing a mini-max game either implicitly (GraphVAT) or explicitly (GraphSGAN) in the training phase. Moreover, the results are consistent with findings in previous work [15], [35], [42], [49]. • Among the baselines, 1) the methods that jointly account for the graph structure and node features (in the category of +Node Features) outperform LP and DeepWalk, which only consider graph structure.…”
Section: Model Comparison (supporting)
confidence: 88%
“…It avoids the problem that the LSTM network is not invariant to the permutation of node sequences. Network Representations with Adversarially Regularized Autoencoders (NetRA) [64] proposes a graph encoder-decoder framework with a general loss function, defined as $L = -\mathbb{E}_{z \sim P_{\mathrm{data}}(z)}\big[\mathrm{dist}\big(z, \mathrm{dec}(\mathrm{enc}(z))\big)\big]$,…”
Section: A Network Embedding (mentioning)
confidence: 99%
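
As a concrete illustration of the encoder-decoder objective quoted above, here is a toy PyTorch sketch that minimizes the expected reconstruction distance dist(z, dec(enc(z))) over sampled inputs, taking dist to be squared Euclidean distance. It is a stand-in under stated assumptions, not NetRA itself, which encodes random-walk node sequences with LSTMs and additionally regularizes the latent space adversarially.

```python
# Toy encoder-decoder minimizing E_{z~P_data}[dist(z, dec(enc(z)))].
# Dimensions and the choice of squared-L2 distance are illustrative assumptions;
# NetRA uses LSTM sequence encoders plus adversarial latent regularization.
import torch
import torch.nn as nn

in_dim, hid_dim = 128, 32
enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
dec = nn.Sequential(nn.Linear(hid_dim, in_dim))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def reconstruction_step(z: torch.Tensor) -> float:
    """One gradient step on the expected reconstruction distance."""
    opt.zero_grad()
    loss = ((z - dec(enc(z))) ** 2).sum(dim=1).mean()
    loss.backward()
    opt.step()
    return loss.item()

batch = torch.randn(64, in_dim)   # stand-in for samples z ~ P_data(z)
print(reconstruction_step(batch))
```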
“…Graph-to-sequence learning learns to generate sentences with the same meaning given a semantic graph of abstract meaning representation (AMR).

[benchmark-dataset table; first row truncated] [22], [23], [25], [41], [43], [44], [45], [49], [50], [51], [53], [56], [61], [62]
Citation Networks
Citeseer [117]: 1 graph, 3,327 nodes, 4,732 edges, 3,703 features, 6 classes; used in [22], [41], [43], [45], [50], [51], [53], [56], [61], [62]
Pubmed [117]: 1 graph, 19,717 nodes, 44,338 edges, 500 features, 3 classes; used in [18], [22], [25], [41], [43], [44], [45], [49], [51], [53], [55], [56], [61], [62], [70], [95]
DBLP (v11) [118]: 1 graph, 4,107,340 nodes, 36,624,464 edges; used in [64], [70], [99]
Biochemical Graphs
PPI [119]: 24 graphs, 56,944 nodes, 818,716 edges, 50 features, 121 classes; used in [18], [42], [43], [48], [45], [50]...”
Section: Practical Applications (mentioning)
confidence: 99%
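
The Citeseer and Pubmed rows above are the standard Planetoid benchmarks, and their statistics can be checked directly by loading them, for example with PyTorch Geometric as sketched below. The root path is a placeholder assumption, and note that PyTorch Geometric stores each undirected edge in both directions, so edge counts come out roughly doubled relative to the table.

```python
# Hedged sketch: load the citation-network benchmarks from the table with
# PyTorch Geometric and print their statistics. Root directory is a placeholder.
from torch_geometric.datasets import Planetoid

for name in ("Citeseer", "Pubmed"):
    dataset = Planetoid(root="/tmp/planetoid", name=name)
    data = dataset[0]
    # num_edges counts directed edges, i.e. ~2x the undirected counts above.
    print(name, data.num_nodes, data.num_edges,
          dataset.num_node_features, dataset.num_classes)
```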
“…This work was later expanded upon by Micheli [12] and Scarselli et al. [13]. In 2013, the Graph Convolutional Network (GCN) was presented by Bruna et al. [14] using the principles of spectral graph theory. Many other forms of GNN have been presented since then, including, but not limited to, Graph Attention Networks [15], Graph Autoencoders [16][17][18][19], and Graph Spatial-Temporal Networks [20][21][22][23].…”
Section: Introduction (mentioning)
confidence: 99%