2020 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm50108.2020.00056

Beyond Localized Graph Neural Networks: An Attributed Motif Regularization Framework

Abstract: We present InfoMotif, a new semi-supervised, motif-regularized learning framework over graphs. We overcome two key limitations of message passing in popular graph neural networks (GNNs): localization (a k-layer GNN cannot utilize features outside the k-hop neighborhood of the labeled training nodes) and over-smoothed (structurally indistinguishable) representations. We propose the concept of attributed structural roles of nodes based on their occurrence in different network motifs, independent of network proximity…
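The abstract's core idea — regularizing a GNN by maximizing mutual information between embeddings of nodes that co-occur in the same motif — can be sketched with a contrastive (InfoNCE-style) lower bound. The function name, normalization, and temperature are our assumptions for illustration, not InfoMotif's exact objective:

```python
import numpy as np

def motif_mi_regularizer(z, motif_pairs, temperature=0.5):
    """Contrastive (InfoNCE-style) lower bound on mutual information
    between embeddings of nodes that co-occur in the same motif.
    z: (n, d) node embeddings; motif_pairs: list of (i, j) index pairs.
    Returns a non-negative loss to add to the supervised objective."""
    # Normalize embeddings so dot products are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # (n, n) similarity matrix
    loss = 0.0
    for i, j in motif_pairs:
        logits = sim[i]
        # Positive: the motif co-occurring node j; negatives: all nodes
        # other than i itself (the denominator excludes the self-term).
        denom = np.exp(logits).sum() - np.exp(logits[i])
        loss -= logits[j] - np.log(denom)
    return loss / len(motif_pairs)
```

Minimizing this loss pulls motif co-occurring nodes together in embedding space regardless of their graph distance, which is how a motif regularizer can reach beyond the k-hop locality of message passing.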

Cited by 15 publications (4 citation statements) · References 32 publications
“…Node properties such as degree, proximity, and attributes, which are seen as local structure information, are often used as the ground truth to fully exploit the unlabeled data [17]. For example, InfoMotif [31] models attribute correlations in motif structures with mutual information maximization to regularize graph neural networks. Meanwhile, global structure information like node pair distance is also harnessed to facilitate representation learning [35].…”
Section: Self-supervised Learning
Confidence: 99%
“…There are several other proposed variants of GNNs; however, they are all confined to only capturing low-order graph structures around every node (Li et al., 2018). GNN models have recently incorporated graphlets (Tu et al., 2018; Feng & Chen, 2020), motifs (Zhao et al., 2018; Sankar et al., 2020; Subramonian, 2021), and anonymous walks (Long et al., 2020; Jin et al., 2020) to leverage higher-order graph structures. gl-DCNN (Tu et al., 2018) concatenates node graphlet information and node features for input to diffusion-convolutional neural networks.…”
Section: Network Embedding
Confidence: 99%
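The gl-DCNN approach cited above — concatenating per-node graphlet information with raw features before convolution — reduces to a simple preprocessing step. A minimal sketch, where the helper name and the log-scaling of counts are our assumptions rather than gl-DCNN's exact recipe:

```python
import numpy as np

def augment_with_graphlets(features, graphlet_counts):
    """Concatenate per-node graphlet occurrence counts with the raw
    feature matrix, yielding the augmented input for a GNN layer.
    features: (n, d) node features; graphlet_counts: (n, g) counts."""
    # Log-scale counts to tame their typically heavy-tailed distribution.
    scaled = np.log1p(graphlet_counts.astype(float))
    return np.concatenate([features, scaled], axis=1)
```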
“…On the other hand, capturing high-order information with graph neural networks has attracted renewed interest, and research on motifs and hypergraphs is particularly important. Sankar et al. (2020) learn statistical dependencies between structurally similar nodes with co-varying attributes, independent of network proximity: they maximize motif-based mutual information and dynamically prioritize the significance of different motifs to learn network embeddings. Chen et al. (2021) proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif.…”
Section: Related Work
Confidence: 99%
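The "dynamically prioritizes the significance of different motifs" idea attributed to InfoMotif above could, in spirit, be realized as a softmax weighting over per-motif regularization losses, so harder motifs receive more attention. This is a hedged illustration under our own assumptions, not the paper's exact curriculum scheme:

```python
import numpy as np

def motif_weights(per_motif_losses, temperature=1.0):
    """Softmax-based task weighting: motifs with larger regularization
    loss get proportionally larger weight in the combined objective.
    Lower temperature sharpens the weighting toward the hardest motif."""
    losses = np.asarray(per_motif_losses, dtype=float)
    w = np.exp(losses / temperature)
    return w / w.sum()
```

The weighted regularizer is then the dot product of these weights with the per-motif losses, recomputed each training step so priorities adapt as motifs are learned.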