2018
DOI: 10.48550/arxiv.1802.04407
Preprint

Adversarially Regularized Graph Autoencoder for Graph Embedding

Cited by 50 publications (59 citation statements) | References 14 publications
“…1-A). Drawing inspiration from ARVGA [13], our framework is composed of a variational autoencoder A_align and a discriminator D_align. The A_align comprises a probabilistic encoder that encodes an input as a distribution over the latent space instead of encoding an input as a single point.…”
Section: Proposed Methods
confidence: 99%
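The excerpt above describes a probabilistic encoder that maps each input to a distribution over the latent space rather than to a single point. A minimal NumPy sketch of that idea, with the usual reparameterization step (the weight matrices, shapes, and latent dimension here are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Map inputs to a Gaussian over the latent space: (mean, log-variance)
    per input, instead of a single latent point."""
    mu = x @ W_mu
    logvar = x @ W_logvar
    return mu, logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps; in a real autodiff framework this trick
    keeps the sampling step differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy batch: 4 inputs with 8 features, latent dimension 2 (all assumed).
x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 2))
W_logvar = rng.standard_normal((8, 2))
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
```

In the adversarial setup the excerpt refers to, a discriminator would additionally be trained to tell samples like `z` apart from draws of a prior, pushing the encoded distribution toward that prior.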
“…To address the challenges above, and motivated by the recent development of graph neural network-based solutions, we propose a multi-resolution Stairway-GraphNet (SG-Net) method to jointly predict and super-resolve a target graph modality based on a given modality in both inter- and intra-domains. To do so, prior to the prediction blocks, we propose an inter-modality aligner network based on an adversarially regularized variational graph autoencoder (ARVGA) [13] to align the training graphs of the source modality to those of the target one. Second, given the aligned source graphs, we design an inter-modality super-resolution graph GAN (gGAN) to map the aligned source graph from one modality (e.g., morphological) to the target modality (e.g., functional).…”
Section: Introduction
confidence: 99%
“…Metrics. Following [12,30], we use accuracy (ACC), normalized mutual information (NMI), precision, F-score (F1), and adjusted Rand index (ARI) as our evaluation metrics. We repeat each experiment 10 times.…”
Section: Node Clustering
confidence: 99%
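Of the metrics the excerpt lists, clustering accuracy (ACC) needs one extra step: predicted cluster IDs are arbitrary, so ACC is computed under the best one-to-one mapping between predicted and true labels. A stdlib-only sketch, using brute-force search over label permutations as an illustrative assumption (real evaluations typically use the Hungarian algorithm, and assume the same number of predicted and true clusters, as here):

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """ACC: fraction of correctly labeled points under the best one-to-one
    relabeling of predicted clusters. Exponential in the number of clusters,
    so only suitable for small cluster counts."""
    labels_true = sorted(set(y_true))
    labels_pred = sorted(set(y_pred))
    best = 0
    for perm in permutations(labels_true, len(labels_pred)):
        mapping = dict(zip(labels_pred, perm))
        hits = sum(mapping[p] == t for p, t in zip(y_pred, y_true))
        best = max(best, hits)
    return best / len(y_true)

# Predicted labels are a pure relabeling of the truth, so ACC should be 1.0.
acc = clustering_accuracy([0, 0, 1, 1, 2, 2], [1, 1, 2, 2, 0, 0])
```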
“…Also, the graph autoencoder (GAE) and variational graph autoencoder (VGAE) have been proposed, which use a GCN as the encoder and the inner product between node embeddings as the decoder in an autoencoder or variational autoencoder framework for unsupervised representation learning [18], [30]. Building on GAE and VGAE, Pan et al [15] have proposed ARGA and ARVGA with an adversarial training scheme in which the latent node representations are enforced to match a prior distribution. Wang et al [16] have proposed MGAE, a GAE with a marginalization process that perturbs the network attribute information.…”
Section: Deep Learning For Network Clustering
confidence: 99%
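The GAE pattern described in the excerpt above, a GCN encoder paired with an inner-product decoder, can be sketched as follows. This is an illustrative NumPy toy, not the authors' implementation; the tiny graph, identity features, and random weights are assumptions:

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2,
    the propagation rule used by GCN encoders."""
    A_loop = A + np.eye(A.shape[0])
    d = A_loop.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_loop @ D_inv_sqrt

def gcn_encode(A, X, W):
    """One graph-convolution layer: ReLU(A_norm @ X @ W) -> node embeddings Z."""
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def inner_product_decode(Z):
    """Reconstruct edge probabilities as sigmoid(Z @ Z.T)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = np.eye(3)                                                 # identity features
W = rng.standard_normal((3, 2))                               # embedding dim 2
Z = gcn_encode(A, X, W)
A_rec = inner_product_decode(Z)
```

Training would minimize the reconstruction loss between `A_rec` and `A`; ARGA/ARVGA add the adversarial term on `Z` described above.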
“…To handle these challenges, many network embedding and graph neural network methods have been developed recently for node representation learning, improving the accuracy of downstream applications such as graph classification, link prediction, and graph clustering. Representative methods include DeepWalk [9], LINE [10], node2vec [11], struc2vec [12], GCN [13], GraphSAGE [14], ARGA/ARVGA [15], MGAE [16], AGC [8], etc. For example, DeepWalk [9] and node2vec [11] are two representative network embedding methods that learn low-dimensional node representations through local neighborhood structure prediction.…”
Section: Introduction
confidence: 99%
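The neighborhood-structure prediction that DeepWalk-style methods rely on starts by sampling truncated random walks and treating them as "sentences" for a skip-gram model. A stdlib-only sketch of just the walk-sampling step (the toy adjacency list, walk length, and walks-per-node are illustrative assumptions):

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Sample truncated random walks from every node of an adjacency-list
    graph; each walk is a list of node IDs."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:          # dead end: truncate the walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

adj = {0: [1], 1: [0, 2], 2: [1]}     # toy 3-node path graph
walks = random_walks(adj)
```

Feeding these walks to a skip-gram model (as DeepWalk does) then yields node embeddings in which co-visited nodes land close together; node2vec differs only in biasing the transition probabilities.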