Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2023
DOI: 10.1145/3580305.3599388
Improving Expressivity of GNNs with Subgraph-specific Factor Embedded Normalization

Cited by 3 publications (1 citation statement)
References 24 publications
“…Therefore, they devise a novel normalization method named GraphNorm that automatically adjusts the step of the shift operation to preserve graph structure information. Besides Cai's work, there are other graph-specific normalization methods aimed at problems in graph representation learning such as over-smoothing and varying graph sizes [202,203,204,205,206,207]. Very recently, Eliasof et al. [208] introduced GRANOLA, which adaptively performs normalization on node features according to the input graph by attaching the random features mentioned in Section 3 and then passing them through an additional GNN; this not only enhances the performance of GNNs across various graph benchmarks and architectures, but also increases the expressive power of the GNN model.…”
Section: Training Tricks
confidence: 99%
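The adaptive scheme the statement describes (attach random node features, run them through an auxiliary GNN, and use its output as per-node normalization parameters) can be sketched as follows. This is a hypothetical simplification with randomly initialized weights, not the authors' implementation of GRANOLA; the function name `granola_style_norm` and all weight matrices are illustrative assumptions.

```python
import numpy as np

def granola_style_norm(x, adj, rng, eps=1e-5):
    """Sketch of graph-adaptive normalization in the spirit of GRANOLA:
    1. attach random features to each node,
    2. run one message-passing step (a stand-in for the extra GNN),
    3. predict per-node scale/shift and apply them to standardized features.
    All weights here are random placeholders, not learned parameters."""
    n, d = x.shape

    # Step 1: append random node features (the "random feature" trick).
    r = rng.standard_normal((n, d))
    z = np.concatenate([x, r], axis=1)                        # (n, 2d)

    # Step 2: one symmetrically normalized aggregation, A_hat @ z @ W.
    a_hat = adj + np.eye(n)                                   # add self-loops
    deg = a_hat.sum(axis=1)
    a_hat = a_hat / np.sqrt(np.outer(deg, deg))
    w = rng.standard_normal((2 * d, 2 * d)) / np.sqrt(2 * d)
    h = np.tanh(a_hat @ z @ w)                                # (n, 2d)

    # Step 3: per-node affine parameters from the auxiliary GNN output.
    w_gamma = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
    w_beta = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
    gamma, beta = h @ w_gamma, h @ w_beta                     # (n, d) each

    # Standardize x over the node dimension, then apply adaptive scale/shift,
    # so the normalization depends on the input graph rather than being fixed.
    mu, sigma = x.mean(axis=0), x.std(axis=0) + eps
    return gamma * (x - mu) / sigma + beta
```

Because `gamma` and `beta` are produced per node from the graph itself, two non-isomorphic graphs with identical node features can receive different normalizations, which is the mechanism the statement credits for the increase in expressive power.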