Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM 2019)
DOI: 10.1145/3357384.3357880
Graph Convolutional Networks with Motif-based Attention

Cited by 84 publications (50 citation statements) | References 17 publications
“…The concept of the motif was first introduced in 2002 [10]; a motif is a frequently repeated pattern in a complex network and serves as a building block of such networks. Several works [11][12][13] show that motifs play an important role in understanding and capturing the higher-order structural information of biological networks, social networks, academic networks, and so on. Capturing motif structure and motif interactions can improve the quality of network embeddings.…”
Section: Deep Graph Convolutional Neural Network
confidence: 99%
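To make the motif idea concrete, here is a minimal sketch (my illustration, not the cited paper's implementation) that counts triangle motifs, the simplest nontrivial motif, and derives a motif-weighted adjacency of the kind a motif-aware GCN could aggregate over. The networkx library and the karate-club example graph are assumptions chosen for illustration.

```python
# Minimal sketch (illustrative only): triangles as the motif of interest.
import networkx as nx

G = nx.karate_club_graph()  # a small social network, used purely as an example

# Per-node motif feature: how many triangles each node participates in.
tri = nx.triangles(G)  # dict: node -> triangle count
print(tri[0])

# Motif-weighted adjacency: weight each edge (u, v) by the number of
# triangles it closes, i.e. the neighbors shared by u and v.
W = {(u, v): len(set(G[u]) & set(G[v])) for u, v in G.edges()}
```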
“…Standard MPNNs have been shown to be at most as powerful as the WL test in distinguishing non-isomorphic graphs [10], [11]. Some works aggregate other types of structures instead of neighbors, such as small subgraphs [25] or paths [26]. Considerable effort has also been devoted to building deeper MPNNs [27], [28] and more powerful models [11], [12].…”
Section: Related Work
confidence: 99%
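As a quick illustration of the expressiveness bound mentioned above, the following sketch (my own, assuming simple unlabeled graphs) runs 1-WL color refinement. Each MPNN layer aggregates exactly the information in the signature computed here, which is why an MPNN can distinguish at most what 1-WL distinguishes.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement; adj maps node -> list of neighbors."""
    color = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # New signature = (own color, sorted multiset of neighbor colors),
        # the same information one MPNN aggregate-and-update step sees.
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v]))) for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: palette[sig[v]] for v in adj}  # compress signatures to ints
    return color

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
# Graphs whose refined color histograms differ are certainly non-isomorphic.
print(Counter(wl_colors(triangle).values()) == Counter(wl_colors(path).values()))  # False
```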
“…Graph Attention Networks (GATs) have been successfully applied to various tasks over graphs (Velickovic et al., 2018; Lee et al., 2018b), such as graph classification (Wu et al., 2019b; Lee et al., 2018a), link prediction (Abu-El-Haija et al., 2018), and node classification (Lee et al., 2019; Zhang et al., 2020a). GATs learn from the underlying graph structure through a localized attention mechanism (Wu et al., 2019a; Xu et al., 2019; Vashishth et al., 2020b): the hidden representation of each node is computed by recursively aggregating and attending over the features of its local neighbors, with the weighting coefficients calculated inductively by a self-attention strategy (Thekumparampil et al., 2018; Qian et al., 2018; Zhang et al., 2018).…”
Section: Introduction
confidence: 99%
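To ground the aggregate-and-attend description, here is a minimal numpy sketch of a single GAT-style layer in the spirit of Velickovic et al. (2018). The shapes, the additive attention form, and the toy path graph are illustrative assumptions; this is not the motif-based variant the CIKM paper proposes.

```python
import numpy as np

def gat_layer(X, adj, W, a, leaky=0.2):
    """X: (N, F) node features; adj: (N, N) 0/1 adjacency with self-loops;
    W: (F, Fp) projection; a: (2*Fp,) attention vector."""
    H = X @ W                                   # project node features
    Fp = H.shape[1]
    # e[i, j] = LeakyReLU(a^T [h_i || h_j]), decomposed into two dot products
    e = (H @ a[:Fp])[:, None] + (H @ a[Fp:])[None, :]
    e = np.where(e > 0, e, leaky * e)           # LeakyReLU
    e = np.where(adj > 0, e, -1e9)              # mask non-edges before softmax
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # row-wise softmax
    return alpha @ H                            # attention-weighted aggregation

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # path graph
out = gat_layer(X, adj, rng.normal(size=(3, 2)), rng.normal(size=(4,)))
print(out.shape)  # (4, 2)
```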