2020 28th Signal Processing and Communications Applications Conference (SIU)
DOI: 10.1109/siu49456.2020.9302271
Graph Embedding For Link Prediction Using Residual Variational Graph Autoencoders

Cited by 5 publications (4 citation statements)
References 12 publications
“…For example, ADN 24 is a graph autoencoder structure and achieves information diffusion through alternating spatial and temporal self-attention. Due to the power of GAE 25, it is widely used in different research directions, such as link prediction [26][27][28][29][30], graph clustering 31,32, and hyperspectral anomaly detection 33. While the traditional GCN takes node features and the adjacency matrix as input and outputs node embeddings, GAEs compress the node embeddings of all nodes in a graph into a single graph embedding to capture contextual information.…”
Section: Graph Convolution Network
confidence: 99%
“…Graph autoencoders (GAEs) are an unsupervised learning method: they map nodes to a latent vector space through an encoding process, then reconstruct graph information from those latent vectors to generate a graph similar to the original one (decoding) 13,20. Due to the power of GAE 11, it is widely used in different research directions, such as link prediction [21][22][23][24][25], graph clustering 26,27, and hyperspectral anomaly detection 28. While the traditional GCN takes node features and the adjacency matrix as input and outputs node embeddings, GAEs compress the node embeddings of all nodes in a graph into a single graph embedding to capture contextual information.…”
Section: Graph Convolution Network
confidence: 99%
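The encode/decode pipeline described above can be sketched in a few lines. This is a minimal illustrative example, not the cited papers' implementations: a one-layer GCN encoder produces node embeddings Z, and an inner-product decoder sigmoid(Z Zᵀ) scores candidate links. The toy graph, identity features, and random (untrained) weights are assumptions for demonstration only.

```python
import numpy as np

def normalize_adj(a):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gae_embed(a, x, w):
    """One-layer GCN encoder: Z = ReLU(A_norm X W)."""
    return np.maximum(normalize_adj(a) @ x @ w, 0.0)

def decode(z):
    """Inner-product decoder: predicted link probabilities sigmoid(Z Z^T)."""
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))

# Toy 4-node path graph 0-1-2-3, featureless nodes (identity features).
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.eye(4)
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 2))   # untrained weights, illustration only
z = gae_embed(a, x, w)        # node embeddings, shape (4, 2)
scores = decode(z)            # scores[i, j] = predicted probability of edge (i, j)
```

In a trained GAE the weights would be optimized so that `scores` reconstructs the observed adjacency matrix; held-out entries of `scores` then serve as link-prediction scores.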
“…Phuc, Yamada & Kashima (2020) embed several graphs with similar structural properties to boost link prediction accuracy. Keser et al. (2020) employ skip-connections in VGAE.…”
Section: Link Prediction
confidence: 99%
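The skip-connection idea mentioned above can be sketched as follows. This is a hypothetical numpy illustration, not the architecture from Keser et al. (2020): a two-layer VGAE encoder where the hidden representation `h` is added back into the mean branch (a residual connection, which requires hidden and latent dimensions to match), followed by the standard reparameterization trick and inner-product decoding. Graph, weights, and dimensions are assumptions for demonstration.

```python
import numpy as np

def normalize_adj(a):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def residual_vgae_encode(a_norm, x, w1, w_mu, w_logvar, rng):
    """VGAE encoder with a skip connection: the first layer's output h
    is added to the mean branch (hypothetical residual variant)."""
    h = np.maximum(a_norm @ x @ w1, 0.0)        # first GCN layer + ReLU
    mu = a_norm @ h @ w_mu + h                  # skip connection: + h
    logvar = a_norm @ h @ w_logvar              # log-variance branch
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps         # reparameterization trick
    return z, mu, logvar

# Toy 4-node path graph, identity features, untrained random weights.
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.eye(4)
rng = np.random.default_rng(1)
w1 = rng.normal(size=(4, 2))       # input dim 4 -> hidden dim 2
w_mu = rng.normal(size=(2, 2))     # hidden dim 2 -> latent dim 2
w_logvar = rng.normal(size=(2, 2))
z, mu, logvar = residual_vgae_encode(normalize_adj(a), x, w1, w_mu, w_logvar, rng)
link_probs = 1.0 / (1.0 + np.exp(-(z @ z.T)))   # inner-product decoder
```

The residual term lets gradients and low-level structure bypass the second GCN layer, which is the usual motivation for skip-connections in deeper graph encoders.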