2020
DOI: 10.48550/arxiv.2006.16904
Preprint

Graph Clustering with Graph Neural Networks

Abstract: Graph Neural Networks (GNNs) have achieved state-of-the-art results on many graph analysis tasks such as node classification and link prediction. However, important unsupervised problems on graphs, such as graph clustering, have proved more resistant to advances in GNNs. In this paper, we study unsupervised training of GNN pooling in terms of their clustering capabilities. We start by drawing a connection between graph clustering and graph pooling: intuitively, a good graph clustering is what one would expect …

Cited by 30 publications (52 citation statements)
References 19 publications (29 reference statements)
“…From early spectral convolutions [15], [30] to distributed implementations of (equivalent) shift-invariant polynomial graph filters [19], [26], [31], GCNs integrate information from both the graph topology and nodal attributes to learn representations of network data. Indeed, the GRL paradigm is to learn low-dimensional embeddings of individual vertices, edges, or the graph itself [23], [32]-[34], which can then be used in e.g., (semi-supervised) node classification [15], link prediction [35], graph clustering [36], [37], and graph classification [38]. Recently, GRL ideas have permeated to neuroimaging data analysis for behavioral state classification [39], to study the relationship between SC and FC [13], [40], and to extract representations for subject classification [41]-[43].…”
Section: A. Related Work
confidence: 99%
“…(1) The most basic approach is unsupervised learning of the network embedding, followed by a clustering algorithm, e.g., K-Means or GMM, on the embedding. (2) Some architectures learn the network embedding with the primary objective of finding good clustering assignments [47,48]. However, they usually do not perform well on downstream tasks such as node classification, as the embeddings are aimed primarily at finding clusters.…”
Section: Node Clustering
confidence: 99%
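The two-stage recipe in approach (1) of the excerpt above — learn node embeddings, then run K-Means on them — can be sketched in plain Python. The `embeddings` values below are illustrative stand-ins, not outputs of any GNN from the cited works:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny K-Means: assign each point to its nearest centroid, then recompute."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
                  for pt in points]
        # Update step: move each centroid to the mean of its cluster.
        for c in range(k):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return labels

# Hypothetical 2-D "node embeddings": two well-separated groups of nodes.
embeddings = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(embeddings, k=2)
```

As the excerpt notes, this decouples representation learning from clustering, whereas approach (2) bakes the clustering objective into the embedding itself.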
“…Some GNNs directly optimize for non-overlapping community detection with community-wise loss functions. Single objective approaches propose variations of traditional cohesiveness scores, such as min-cut [3] and modularity [35]. Yet, single-objective methods inherit the limitations of the score they aim to optimize, providing community memberships that are subject to the loss objective's definition of community.…”
Section: Graph Neural Network
confidence: 99%
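As a concrete reference for the modularity score that the excerpt says [35] optimizes, here is a plain-Python sketch of Newman's modularity, Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j); the toy graph and partition are illustrative, not taken from the cited paper:

```python
def modularity(adj, communities):
    """Newman modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    two_m = sum(degrees)  # 2m: every undirected edge is counted twice
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degrees[i] * degrees[j] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge (2-3).
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
q_split = modularity(adj, [0, 0, 0, 1, 1, 1])   # cut at the bridge
q_single = modularity(adj, [0, 0, 0, 0, 0, 0])  # everything in one community
```

Splitting at the bridge scores strictly higher than the trivial single-community partition (whose modularity is exactly zero), which is the kind of cohesiveness signal these single-objective losses reward.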
“…CommDGI's combined objective overcomes the limitations of the single score methods, but requires extensive parameter tuning for proper results. Although models like DMoN [35] return soft community assignments through a softmax output layer, both single and combined objective methods explicitly penalize overlaps among communities.…”
Section: Graph Neural Network
confidence: 99%
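The soft community assignments the excerpt attributes to DMoN [35] amount to a row-wise softmax over per-node community logits, so each node gets a probability distribution over communities. A minimal sketch, with hypothetical logits (3 nodes, 2 communities) rather than real model outputs:

```python
import math

def soft_assignments(logits):
    """Row-wise softmax: each node's logits become a probability distribution."""
    out = []
    for row in logits:
        mx = max(row)  # subtract the max for numerical stability
        exps = [math.exp(v - mx) for v in row]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

# Hypothetical per-node community logits: node 0 leans to community 0,
# node 1 to community 1, node 2 is ambiguous.
logits = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
memberships = soft_assignments(logits)
```

Each row of `memberships` sums to 1, so overlap is representable in principle; the point of the excerpt is that the training losses of these methods still penalize it.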