2020
DOI: 10.48550/arxiv.2005.03675
Preprint

Machine Learning on Graphs: A Model and Comprehensive Taxonomy

Abstract: There has been a surge of recent interest in learning representations for graph-structured data. Graph representation learning methods have generally fallen into three main categories, based on the availability of labeled data. The first, network embedding (such as shallow graph embedding or graph auto-encoders), focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective…

Citation types: 0 supporting, 57 mentioning, 0 contrasting

Year published: 2020–2023

Cited by 41 publications (57 citation statements)
References 56 publications

Citation statements
“…GCNs are versatile signal and information processing architectures [15]-[17], which comprise stacked layers of graph (convolutional) filters followed by point-wise nonlinearities; see [17], [23], [26]-[29] for recent surveys and the references therein. From early spectral convolutions [15], [30] to distributed implementations of (equivalent) shift-invariant polynomial graph filters [19], [26], [31], GCNs integrate information from both the graph topology and nodal attributes to learn representations of network data.…”
Section: A. Related Work (mentioning)
confidence: 99%
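The statement above sketches the GCN blueprint: stacked layers of graph (convolutional) filters followed by point-wise nonlinearities, with shift-invariant polynomial graph filters as one concrete realization. Below is a minimal NumPy sketch of that construction, assuming S is a graph shift operator such as the symmetrically normalized adjacency; all function names and the example coefficients are illustrative, not taken from the cited papers.

```python
import numpy as np

def poly_graph_filter(S, X, coeffs):
    """Apply the polynomial graph filter H(S) = sum_k coeffs[k] * S^k
    to nodal attributes X, where S is a graph shift operator."""
    out = np.zeros_like(X)
    SkX = X.copy()                 # S^0 @ X
    for h_k in coeffs:
        out = out + h_k * SkX      # accumulate coeffs[k] * S^k @ X
        SkX = S @ SkX              # advance to S^{k+1} @ X
    return out

def gcn_layer(S, X, W, coeffs):
    """One graph convolutional layer: filter over the topology, mix
    features with a learnable matrix W, apply a point-wise ReLU."""
    return np.maximum(poly_graph_filter(S, X, coeffs) @ W, 0.0)

# Toy usage: a random undirected graph on 5 nodes with 3-dim attributes.
rng = np.random.default_rng(0)
A = np.triu(rng.random((5, 5)) < 0.4, k=1)
A = (A | A.T).astype(float)                # symmetric adjacency
d = A.sum(axis=1); d[d == 0] = 1.0
S = A / np.sqrt(np.outer(d, d))            # normalized adjacency as shift
X = rng.standard_normal((5, 3))            # nodal attributes
W = rng.standard_normal((3, 2))            # learnable feature mixing
H = gcn_layer(S, X, W, coeffs=[0.5, 0.3, 0.2])   # degree-2 filter
```

Because the filter is a polynomial in S, each layer only combines information from a node's k-hop neighborhood, which is what makes the distributed implementations mentioned in the statement possible.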
“…From early spectral convolutions [15], [30] to distributed implementations of (equivalent) shift-invariant polynomial graph filters [19], [26], [31], GCNs integrate information from both the graph topology and nodal attributes to learn representations of network data. Indeed, the GRL paradigm is to learn low-dimensional embeddings of individual vertices, edges, or the graph itself [23], [32]-[34], which can then be used in e.g., (semi-supervised) node classification [15], link prediction [35], graph clustering [36], [37], and graph classification [38]. Recently, GRL ideas have permeated to neuroimaging data analysis for behavioral state classification [39], to study the relationship between SC and FC [13], [40], and to extract representations for subject classification [41]-[43].…”
Section: A. Related Work (mentioning)
confidence: 99%
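This statement frames GRL as producing low-dimensional vertex embeddings that downstream tasks consume. As a hedged illustration of one such task, the sketch below scores candidate links with the sigmoid of the inner product of the two endpoint embeddings, the common decoder in the graph auto-encoders mentioned in the abstract; the embedding matrix here is random placeholder data, not the output of any cited method.

```python
import numpy as np

def link_score(Z, i, j):
    """Probability-like score that edge (i, j) exists, computed from the
    node embedding matrix Z via a sigmoid inner-product decoder."""
    return 1.0 / (1.0 + np.exp(-float(Z[i] @ Z[j])))

# Placeholder embeddings; in practice Z would come from a trained
# encoder such as stacked GCN layers.
Z = np.random.default_rng(1).standard_normal((5, 4))
print(link_score(Z, 0, 3))   # score for the candidate edge (0, 3)
```

Node classification, graph clustering, and graph classification plug into the same embeddings through different task heads, which is what makes the embedding view a unifying lens for the methods this paper surveys.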