2021
DOI: 10.1109/tkde.2019.2961882
Learning Graph Representation With Generative Adversarial Nets

Abstract: The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space. Existing graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. In this paper, we propose Graph-GAN, an innovative graph representation learning framework unifying the above two classes of methods, …
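A sketch of the kind of minimax objective such a generative-discriminative unification typically uses, where the generator G(·|v_c) approximates the connectivity distribution around a vertex v_c and the discriminator D(v, v_c) scores edge existence (notation assumed here for illustration, not quoted from the paper):

```latex
\min_{\theta_G}\;\max_{\theta_D}\; V(G, D) =
\sum_{c=1}^{V} \Big(
  \mathbb{E}_{v \sim p_{\mathrm{true}}(\cdot \mid v_c)}\big[\log D(v, v_c)\big]
  + \mathbb{E}_{v \sim G(\cdot \mid v_c)}\big[\log\big(1 - D(v, v_c)\big)\big]
\Big)
```

The generator is trained to place probability mass on plausible neighbors of v_c, while the discriminator is trained to separate true neighbors from generated ones.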

Cited by 219 publications (269 citation statements)
References 39 publications (18 reference statements)
“…and detecting fake examples. Recently, several GAN-based models have been proposed to learn graph embeddings, which either generate fake nodes and edges to augment embedding learning [39], [40] or smooth the learned embeddings to follow a prior distribution [41]-[44]. However, using two different networks inevitably doubles the computation of model training and the labor of parameter tuning in GAN-based methods.…”
Section: Generative Adversarial Network
confidence: 99%
“…Introducing high-order neighborhood information has been shown to be effective [23,25,28] in graph-based recommendation, so we introduce the graph convolution network (GCN) [15] and graph attention network (GAT) [24] to encode high-order neighborhood information for the NI model. The graph convolution network computes high-order node representations by stacking several graph convolution layers.…”
Section: Integrating High-order Neighborhood Information
confidence: 99%
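The stacking idea in the quote above can be sketched with a minimal NumPy graph-convolution layer; this follows the standard Kipf-Welling normalization and is an illustrative sketch, not the cited paper's implementation (the graph, features, and weights below are toy values):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W).
    Illustrative sketch using the standard GCN normalization."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees of A+I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy path graph 0-1-2: nodes 0 and 2 are 2-hop neighbors.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3)                  # one-hot input features
W1, W2 = np.eye(3), np.eye(3)  # identity weights, for illustration only

H1 = gcn_layer(A, H, W1)             # 1-hop information
H2 = gcn_layer(A, H1, W2)            # stacking a second layer reaches 2 hops
# After one layer, node 0's row is unaffected by node 2;
# after two layers, node 0's representation depends on node 2.
```

Stacking k such layers lets each node's representation aggregate information from its k-hop neighborhood, which is exactly the "high-order" information the quoted passage refers to.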
“…The main idea of adversarial learning is to simulate a minimax game in which the generator attempts to imitate the genuine data distribution while the discriminator aims to differentiate fake examples from real data. A few pioneering works [20], [21], [36]-[41] have explored adversarial learning in recommender systems. IRGAN [20] is the first influential IR model built on GANs.…”
Section: B Adversarial Training In Recommender Systems
confidence: 99%
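The minimax game described in the quote above is the canonical GAN objective (Goodfellow et al.), written here for reference with x drawn from the data distribution and z from a noise prior:

```latex
\min_{G}\;\max_{D}\;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D maximizes this value by assigning high scores to real examples and low scores to generated ones, while the generator G minimizes it by producing examples the discriminator cannot distinguish from real data.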
“…Feed the obtained vector into the second Gumbel-Softmax layer; 12: Get a one-hot vector representing the item z; 13: Deliver z to the discriminator; … Receive the generated item z from G_θ; 19: Sample a positive item i and a negative item j; 20: Train the BPR model with the sampled items; 21: Update D_φ based on Eq. 10 and keep G_θ fixed; 22: Update G_θ based on Eq. …”
confidence: 99%
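The Gumbel-Softmax step in the quoted algorithm, which turns the generator's logits into a discrete one-hot item that can be delivered to the discriminator, can be sketched as follows; this is a generic straight-through-style sketch under assumed names, not the quoted paper's exact layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=1.0):
    """Sample a category via the Gumbel-Softmax trick.
    Returns the relaxed (soft) distribution and a discrete one-hot vector."""
    # Gumbel(0, 1) noise makes argmax(logits + g) an exact categorical sample.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = np.exp((logits + g) / tau)
    y = y / y.sum()                # softmax over noise-perturbed logits
    one_hot = np.zeros_like(y)
    one_hot[np.argmax(y)] = 1.0    # discrete item z for the discriminator
    return y, one_hot

logits = np.array([2.0, 0.5, 0.1])  # generator scores over 3 items (toy values)
soft, z = gumbel_softmax_sample(logits)
```

The soft vector keeps the sampling step differentiable for the generator's update, while the one-hot vector is what the discriminator actually sees, matching the "get a one-hot vector representing the item z" step above.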