2022
DOI: 10.48550/arxiv.2203.00199
Preprint

Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks

Abstract: Graph neural networks (GNNs) have shown great advantages in many graph-based learning tasks, but they often fail to predict accurately for tasks defined on sets of nodes, such as link and motif prediction. Many recent works propose to address this problem with random node features or node distance features; however, these suffer from slow convergence, inaccurate prediction, or high complexity. In this work, we revisit GNNs that use positional features of nodes given by positional encoding…
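
One widely used family of graph positional encodings builds node positions from eigenvectors of the graph Laplacian. The sketch below illustrates only that general idea; the function name, the toy graph, and the choice to simply concatenate the encoding onto node features are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k eigenvectors of the symmetric normalized Laplacian with the
    smallest nonzero eigenvalues (assuming a connected graph), one k-dimensional
    positional vector per node. Generic Laplacian PE sketch, not the paper's method."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                  # skip the trivial zero eigenvector

# Toy 4-node path graph; the PE columns are appended to ordinary node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
node_feats = np.ones((4, 3))                    # placeholder node features
pe = laplacian_positional_encoding(adj, k=2)
augmented = np.concatenate([node_feats, pe], axis=1)
print(augmented.shape)                          # (4, 5)
```

Note that Laplacian eigenvectors are only defined up to sign (and up to rotation within repeated eigenspaces), which is exactly the kind of instability that equivariant and stable positional encodings aim to handle; the naive sketch above ignores that issue.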

Cited by 4 publications (4 citation statements)
References 17 publications

“…To this end, the generic message passing scheme has been shown to be universal over Turing computable functions [1], although this does not cover the case of non-computable functions or density over certain sets of non-computable functions. More recent breakthroughs have shown that the expressive power is not necessarily a property of architectures alone, because enriching the node feature space with positional encodings [21,21,22], like random node representations, can make MPNNs more powerful [23]. A result that we use in later experiments is that MPNNs enriched with non-trainable node representations can express any non-attributed graph functions while retaining equivariance in probability [24].…”
Section: Universal Approximation of AGFs (mentioning, confidence: 99%)

Universal Local Attractors on Graphs

Krasanakis,
Papadopoulos,
Kompatsiaris
2024
Preprint
“…To this end, the generic message-passing scheme has been shown to be universal over Turing computable functions [1], although this does not cover the case of non-computable functions or density over certain sets of non-computable functions. More recent breakthroughs have shown that the expressive power is not necessarily a property of architectures alone, because enriching the node feature space with positional encodings [21,22], like random node representations, can make MPNNs more expressive [23]. A result that we use in later experiments is that MPNNs enriched with non-trainable node representations can express any non-attributed graph functions while retaining equivariance in probability [24].…”
Section: Universal Approximation of AGFs (mentioning, confidence: 99%)
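
The statements above also mention enriching node features with non-trainable random representations before message passing. A minimal sketch of that augmentation follows; the helper name and dimensions are illustrative assumptions, not the cited works' implementation.

```python
import numpy as np

def append_random_node_features(node_feats: np.ndarray, dim: int = 8,
                                rng=None) -> np.ndarray:
    """Concatenate non-trainable random features to each node's feature vector.
    The random draw can be repeated per sample, so the resulting model is only
    expected to be equivariant 'in probability', as the quoted statement puts it."""
    rng = rng or np.random.default_rng()
    random_ids = rng.normal(size=(node_feats.shape[0], dim))
    return np.concatenate([node_feats, random_ids], axis=1)

# Example: 5 nodes with 3 original features each gain an 8-dimensional random ID.
feats = np.zeros((5, 3))
print(append_random_node_features(feats).shape)     # (5, 11)
```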
“…Although some traditional methods utilize the structural information of the peptide sequences [34,35], it is difficult to combine these deep learning algorithms properly. In recent research in the field of natural language processing (NLP) [36], positional encoding (PE) is used to encode the relative position of words in a sentence, allowing deep models to retain position information among words [37,38]. Supplementing the structure information can effectively help the deep network to achieve better performance, especially those networks that are not sensitive to position information, such as the Transformer [39].…”
Section: Introduction (mentioning, confidence: 99%)
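
This last statement concerns positional encoding for sequences, where position information is added so that order-insensitive models such as the Transformer can exploit word order. Below is the standard sinusoidal scheme from the Transformer literature, shown as a generic sketch rather than the exact encoding used in the cited works.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal PE: even dimensions use sine, odd dimensions cosine,
    with wavelengths forming a geometric progression from 2*pi to 10000*2*pi."""
    positions = np.arange(seq_len)[:, None]                       # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                            # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    return np.where(dims % 2 == 0, np.sin(angles), np.cos(angles))

# Added to token embeddings so the model can distinguish word positions.
print(sinusoidal_positional_encoding(seq_len=4, d_model=6).round(2))
```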