2019
DOI: 10.48550/arxiv.1905.10715
Preprint

Graph Attention Auto-Encoders

Abstract: Auto-encoders have emerged as a successful framework for unsupervised learning. However, conventional auto-encoders are incapable of utilizing explicit relations in structured data. To take advantage of relations in graph-structured data, several graph auto-encoders have recently been proposed, but they neglect to reconstruct either the graph structure or node attributes. In this paper, we present the graph attention auto-encoder (GATE), a neural network architecture for unsupervised representation learning on …
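The abstract describes an encoder/decoder built from graph-attention layers that reconstructs both the node attributes and the graph structure. Below is a minimal NumPy sketch of that idea, assuming a single-head attention layer, the same attention form in encoder and decoder, and an inner-product reconstruction of the adjacency matrix; it illustrates the general technique and is not the authors' implementation.

```python
# Minimal sketch of a graph-attention encoder/decoder pass in the spirit of
# GATE (Salehi & Davulcu, 2019). Layer sizes, the tanh attention form, and the
# sigmoid link reconstruction are illustrative assumptions.
import numpy as np

def attention_layer(X, A, W, v_s, v_t):
    """One attention layer: mix each node's neighbours, weighted by attention
    coefficients computed from the transformed features."""
    H = X @ W                                   # (N, d') transformed features
    logits = np.tanh(H @ v_s)[:, None] + np.tanh(H @ v_t)[None, :]
    logits = np.where(A > 0, logits, -1e9)      # attend only over existing edges
    alpha = np.exp(logits)
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over neighbours
    return alpha @ H                            # attended neighbourhood mixture

rng = np.random.default_rng(0)
N, d, k = 5, 8, 4                               # nodes, attribute dim, latent dim
X = rng.normal(size=(N, d))                     # node attributes
A = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)                        # self-loops, as is common

# Encoder: attributes -> latent node representations
Z = attention_layer(X, A, rng.normal(size=(d, k)),
                    rng.normal(size=k), rng.normal(size=k))
# Decoder: latent representations -> reconstructed attributes
X_hat = attention_layer(Z, A, rng.normal(size=(k, d)),
                        rng.normal(size=d), rng.normal(size=d))
# Structure reconstruction from pairwise similarity of representations
A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

print(X_hat.shape, A_hat.shape)                 # (5, 8) (5, 5)
```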

Cited by 13 publications (18 citation statements) | References 22 publications

“…This issue is partially offset by sparse storage of the adjacency matrix, in general, and largely ameliorated by data that is on the same discretization. In future work we will investigate low-rank approximations to the adjacency matrix (Kanada et al, 2018;Lebedev et al, 2014;Richard et al, 2012;Savas and Dhillon, 2011;Tai et al, 2015), dimensionality reduction techniques (Belkin and Niyogi, 2003;He and Niyogi, 2004), and the use of graph autoencoders (Hasanzadeh et al, 2019;Kipf and Welling, 2016b;Liao et al, 2016;Salehi and Davulcu, 2019) to reduce the mesh-based graphs in-line. We are also pursuing the larger topic of processing images with multiresolution filters (Zhang et al, 2018), e.g., spanning the pixel to the cluster level.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…This issue is partially offset by sparse storage of the adjacency matrix, in general, and largely ameliorated by data that is on the same discretization, as in this work. In future work we will investigate low-rank approximations to the adjacency matrix [80][81][82][83][84], dimensionality reduction techniques [85,86], and the use of graph auto-encoders [87][88][89][90] to reduce the mesh-based graphs in-line. We are also pursuing the larger topic of processing images with multi-resolution filters [91], e.g.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
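Both discussion statements above note that sparse storage of the adjacency matrix offsets the memory cost of large mesh-based graphs. A small illustration of that point follows, with an arbitrary made-up mesh size and average degree; it is not taken from the citing papers.

```python
# For a mesh-based graph with bounded node degree, CSR storage of the adjacency
# matrix grows with the number of edges rather than with N^2.
import numpy as np
from scipy import sparse

N, avg_degree = 10_000, 6                       # e.g. a triangulated surface mesh
rng = np.random.default_rng(0)
rows = np.repeat(np.arange(N), avg_degree)
cols = rng.integers(0, N, size=N * avg_degree)
A = sparse.csr_matrix((np.ones(rows.size), (rows, cols)), shape=(N, N))

dense_bytes = N * N * 8                         # float64 dense storage
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"dense ~{dense_bytes/1e6:.0f} MB vs sparse ~{sparse_bytes/1e6:.2f} MB")
```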
“…We apply two known techniques to boost the autoencoder efficiency (Salehi & Davulcu, 2019). Since indexing of vertices is arbitrary, we proceed as follows: The vertices are sorted by their degree in decreasing order.…”
Section: Order of Vertices and Adjacency Matrix Patches (citation type: mentioning; confidence: 99%)
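The step quoted above, sorting vertices by degree in decreasing order because vertex indexing is arbitrary, amounts to a joint permutation of the rows and columns of the adjacency matrix. A short sketch with a toy random graph, not the citing paper's code:

```python
# Reorder a graph's vertices by decreasing degree and permute the adjacency
# matrix (and, in practice, any node-feature matrix) accordingly.
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = (rng.random((N, N)) < 0.5).astype(int)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, no self-loops

degree = A.sum(axis=1)
order = np.argsort(-degree)                     # decreasing degree
A_sorted = A[np.ix_(order, order)]              # permute rows and columns together

print(degree[order])                            # now non-increasing
```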
“…Attention (Vaswani et al, 2017) was introduced to graph VAE by Salehi & Davulcu (2019) as a crucial component of both the encoder and the decoder. Khan & Kleinsteuber (2021) proposed a graph VAE aiming to maximize the similarity between the embeddings of neighboring and more distant vertices while minimizing the redundancy between the components of these embeddings.…”
Section: Related Work (citation type: mentioning; confidence: 99%)