Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018
DOI: 10.1145/3219819.3220052
Deep Variational Network Embedding in Wasserstein Space

Cited by 128 publications (84 citation statements) · References 20 publications
“…Both methods show that capturing embedding uncertainty learns more meaningful representations in their evaluation tasks. Another recent study [17] proposes to learn node embeddings as Gaussian distributions using the Wasserstein metric rather than KL divergence, as the former preserves edge transitivity.…”
Section: Related Work
confidence: 99%
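The transitivity point in the excerpt above can be made concrete: unlike KL divergence, the 2-Wasserstein distance is a true metric (symmetric, and satisfying the triangle inequality), and for diagonal Gaussians it has a simple closed form, W2² = ‖μ₁ − μ₂‖² + ‖σ₁ − σ₂‖². A minimal sketch with toy node embeddings (all values hypothetical):

```python
import numpy as np

def w2_diag_gaussian(mu1, sigma1, mu2, sigma2):
    """2-Wasserstein distance between diagonal Gaussians N(mu, diag(sigma^2)).

    Closed form for the diagonal case: W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2.
    """
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    return float(np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)))

# Three hypothetical node embeddings: (mean, std) pairs.
a = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
b = (np.array([1.0, 0.0]), np.array([1.0, 1.5]))
c = (np.array([2.0, 1.0]), np.array([1.0, 2.0]))

# Because W2 is a metric, the triangle inequality holds, which is what lets
# closeness propagate along edges (the "transitivity" property):
assert w2_diag_gaussian(*a, *c) <= w2_diag_gaussian(*a, *b) + w2_diag_gaussian(*b, *c)
```

KL divergence offers no such guarantee, since it is asymmetric and violates the triangle inequality.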
“…It can benefit a variety of tasks including recommendation. Many effective network embedding algorithms have been proposed [10,15,17,19,25]. We briefly review some of these methods here.…”
Section: Related Work 2.1 Network Embedding
confidence: 99%
“…SDNE [19] exploited the first-order proximity and second-order proximity in a joint approach to capture both the local and global network structure. DVNE [25] learned a Gaussian distribution in the Wasserstein space for each node to preserve more properties such as transitivity and uncertainty.…”
Section: Related Work 2.1 Network Embedding
confidence: 99%
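The two proximities mentioned for SDNE are easy to illustrate: first-order proximity is the direct edge weight between two nodes, while second-order proximity compares their neighbourhoods. A minimal sketch on a toy adjacency matrix (illustrative only, not SDNE's actual objective):

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (hypothetical, for illustration).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def second_order_proximity(A, i, j):
    """Cosine similarity of the neighborhood vectors (rows) of nodes i and j."""
    ni, nj = A[i], A[j]
    denom = np.linalg.norm(ni) * np.linalg.norm(nj)
    return float(ni @ nj / denom) if denom else 0.0

# First-order proximity is just the edge weight A[i, j]; nodes 0 and 3 have
# no edge (first-order 0) yet share neighbor 2, so their second-order
# proximity is high -- capturing global structure beyond direct links.
print(A[0, 3], second_order_proximity(A, 0, 3))
```

Preserving both jointly is what lets SDNE capture local (edge-level) and global (neighbourhood-level) structure at once.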
“…Their development would offer huge potential in understanding the relation between individuals' characters, their social relationships, the content they are engaged with, and the larger communities they belong to. This would not only provide us with deeper insight into social behaviour, but would also give us predictive tools for the emergence of network structure, individual interests, and behavioural patterns. In this paper we propose a contribution to solving this problem by developing a joint feature-network embedding built on multitask Graph Convolutional Networks [24,25,26,27] and Variational Autoencoders (GCN-VAE) [28,29,30,31,32], which we call the Attributed Network to Vector method (AN2VEC). In our model, different dimensions of the generated embeddings can be dedicated to encoding feature information, network structure, or shared feature-network information separately.…”
confidence: 99%
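The GCN-VAE combination described in that excerpt can be sketched in a few lines: a normalised-adjacency GCN layer produces per-node Gaussian parameters, a sample is drawn via the reparameterisation trick, and the embedding dimensions are partitioned into feature, structure, and shared blocks in the spirit of AN2VEC. All shapes, weights, and the dimension split below are toy assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy input: 4 nodes with 3 attributes each.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
X = rng.normal(size=(4, 3))

# Symmetrically normalised adjacency with self-loops, as in Kipf & Welling's GCN.
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

# One GCN layer followed by two Gaussian heads (a VAE-style encoder).
W1, W_mu, W_lv = (rng.normal(size=s) * 0.1 for s in [(3, 8), (8, 6), (8, 6)])
H = np.maximum(A_norm @ X @ W1, 0.0)        # GCN propagation + ReLU
mu, logvar = H @ W_mu, H @ W_lv             # per-node Gaussian parameters

# Reparameterisation trick: z = mu + sigma * eps, with eps ~ N(0, I).
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# Dedicate dims 0-1 to features, 2-3 to structure, 4-5 to shared information
# (an illustrative split; AN2VEC's actual allocation is a model choice).
z_feat, z_struct, z_shared = z[:, :2], z[:, 2:4], z[:, 4:]
```

A decoder would then reconstruct the adjacency matrix from the structure and shared blocks and the node attributes from the feature and shared blocks, so that each block is forced to carry the information its reconstruction task needs.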