2019
DOI: 10.48550/arxiv.1905.11577
Preprint

Towards Interpretable Sparse Graph Representation Learning with Laplacian Pooling

Abstract: Recent work in graph neural networks (GNNs) has led to improvements in molecular activity and property prediction tasks. However, GNNs lack interpretability as they fail to capture the relative importance of various molecular substructures due to the absence of efficient intermediate pooling steps for sparse graphs. To address this issue, we propose LaPool (Laplacian Pooling), a novel, data-driven, and interpretable graph pooling method that takes into account the node features and graph structure to improve …
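
To make the idea in the abstract concrete, below is a minimal NumPy sketch of Laplacian-based pooling: nodes whose features deviate most from their neighborhood (measured through the graph Laplacian) are kept as cluster centers, and the remaining nodes are soft-assigned to them. The function name `lapool_sketch`, the local-maximum leader selection, and the softmax assignment are illustrative assumptions, not the paper's exact formulation (LaPool uses a sparse attention for the assignment step).

```python
import numpy as np

def lapool_sketch(A, X):
    """Illustrative sketch of Laplacian-based graph pooling.

    A: (N, N) symmetric adjacency matrix
    X: (N, F) node feature matrix
    Returns pooled features X_pool and coarsened adjacency A_pool.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A                        # combinatorial graph Laplacian
    # Per-node signal variation: how strongly a node's features
    # differ from its neighborhood (large norm = locally distinct).
    v = np.linalg.norm(L @ X, axis=1)
    # Keep nodes whose variation is a local maximum among neighbors
    # (assumption: stands in for the paper's leader-selection step).
    leaders = [i for i in range(A.shape[0])
               if all(v[i] >= v[j] for j in np.nonzero(A[i])[0])]
    # Soft-assign every node to a leader by feature similarity
    # (row-wise softmax; the paper uses sparse attention instead).
    scores = X @ X[leaders].T
    S = np.exp(scores - scores.max(axis=1, keepdims=True))
    S /= S.sum(axis=1, keepdims=True)    # (N, K) assignment matrix
    X_pool = S.T @ X                     # pooled node features
    A_pool = S.T @ A @ S                 # coarsened adjacency
    return X_pool, A_pool
```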


Cited by 12 publications (18 citation statements)
References 26 publications
“…The detailed architectures are described in Table II. We remark that any other hierarchical graph pooling techniques [19], [29], which are not included in our experiments, are also compatible with our readout layer; this means that their final classification performance can be further improved with the help of SSRead.…”
Section: A. Experimental Settings, 1) Datasets (mentioning)
confidence: 88%
“…The literature about learning on point clouds also introduced pooling techniques to generalize the typical pooling layers for grids, with most approaches based on voxelization techniques [48,45,43,31]. Many pooling operators have also been proposed based on graph spectral theory [7,36,23], or different clustering, sparsification, and decomposition techniques [35,3,42,53].…”
Section: Pooling in Graph Neural Networks (mentioning)
confidence: 99%
“…Consider a graph space defined on compact node and edge attribute sets $X$, $E$, and let $K(G)$ represent the number of nodes of $G' = \mathrm{POOL}(G)$, where $K(G) \le \bar{K}$ for all $G$ and for some finite $\bar{K} \in \mathbb{N}$. By representing the output of the selection function as a matrix $S \in \mathbb{R}^{N \times \bar{K}}$, we can then interpret SEL as a permutation-equivariant node embedding operation $x_i \mapsto S_{i,:}$, from the space of node attributes to the space of supernode assignments $\mathbb{R}^{\bar{K}}$, where we assumed, without loss of generality, that $S_{i,k} = 0$ for all $k > K(G)$ (this is necessary to ensure that any number of nodes $K(G)$ can be computed by …). Table 1: Pooling methods in the SRC framework. GNN indicates a stack of one or more message-passing layers, MLP is a multi-layer perceptron, $L$ is the normalized graph Laplacian, $\beta$ is a regularization vector (see [42]), $D$ is the degree matrix, $u_{\max}$ is the eigenvector of the Laplacian associated with the largest eigenvalue, $i$ is a vector of indices, and $A_{i,i}$ selects the rows and columns of $A$ according to $i$.…”
Section: Select, Reduce, Connect (mentioning)
confidence: 99%
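
To make the Select/Reduce/Connect decomposition quoted above concrete, here is a minimal sketch assuming dense matrices: SEL produces an assignment matrix $S$, RED pools node features through $S$, and CON rewires the adjacency. The uniform random SEL used here is a placeholder for illustration only, not any specific method from the cited table (real methods learn $S$, e.g. with a GNN).

```python
import numpy as np

def src_pool(A, X, select_fn):
    """Sketch of the SRC (Select, Reduce, Connect) pooling framework.

    A: (N, N) adjacency matrix; X: (N, F) node features.
    select_fn: maps (A, X) -> assignment matrix S of shape (N, K).
    """
    S = select_fn(A, X)       # SEL: node -> supernode assignments
    X_pool = S.T @ X          # RED: aggregate features per supernode
    A_pool = S.T @ A @ S      # CON: connect supernodes via old edges
    return A_pool, X_pool

# Placeholder SEL: assign each node to one of K supernodes at random
# (illustrative only; learned selection functions replace this).
def random_select(A, X, K=3, seed=0):
    rng = np.random.default_rng(seed)
    S = np.zeros((A.shape[0], K))
    S[np.arange(A.shape[0]), rng.integers(0, K, A.shape[0])] = 1.0
    return S
```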