2022
DOI: 10.48550/arxiv.2205.07266
Preprint

Discovering and Explaining the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions

Abstract: Most graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions. Recent works point out that different graph learning tasks require different ranges of interactions between nodes. To investigate the underlying mechanism, we explore the capacity of GNNs to capture pairwise interactions between nodes under contexts with different complexities, especially for their graph-level and node-level applications in scientific domains like biochemistry and physics. …
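The abstract refers to the message passing paradigm. As a rough, illustrative sketch (not the authors' implementation), one message-passing step can be written as mean aggregation over a node's neighbours followed by a learned update; the function name message_passing_step and the toy graph below are hypothetical.

import numpy as np

def message_passing_step(node_feats, adjacency, weight):
    # node_feats: (N, d) node features; adjacency: (N, N) binary matrix;
    # weight: (d, d) linear map (a stand-in for a learned parameter).
    # Aggregate: mean of neighbour features for every node.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = adjacency @ node_feats / deg
    # Update: combine own features with the messages, then linear map + ReLU.
    return np.maximum((node_feats + messages) @ weight, 0.0)

# Toy usage on a 3-node path graph.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.randn(3, 4)
W = np.random.randn(4, 4)
print(message_passing_step(X, A, W).shape)  # -> (3, 4)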

Cited by 1 publication (1 citation statement)
References: 68 publications
“…More importantly, with advanced training setup and structure modernization, ConvNets can readily deliver comparable or even superior performance than well-tuned ViTs without increasing computational budgets [29,74,87,122]. Nevertheless, there remains a representation bottleneck for existing approaches [26,46,85,124]: naive implementation of self-attention or large kernels hampers the modeling of discriminative contextual information and global interactions, leading to the cognition gap between DNNs and human visual system. As in feature integration theory [110], human brains not only extract local features but simultaneously aggregate these features for global perception, which is more compact and efficient than DNNs [73,74].…”
Section: Introduction (mentioning)
confidence: 99%