Preprint, 2023
DOI: 10.21203/rs.3.rs-2638974/v1
Simple Hierarchical PageRank Graph Neural Networks

Abstract: Graph neural networks (GNNs) have many variants for graph representation learning. Several works introduce PageRank into GNNs to improve their neighborhood aggregation capabilities. However, these methods leverage the general PageRank to perform complex neighborhood aggregation to obtain the final feature representation, which leads to high computational cost and over-smoothing. In this paper, we propose simple hierarchical PageRank graph neural networks (SHP-GNNs), which first utilize the simple PageRank to aggr…
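The abstract is truncated before the method details, so the exact SHP-GNN update rule is not shown on this page. For orientation only, here is a minimal sketch of the personalized-PageRank-style propagation that PageRank-based GNNs (e.g., APPNP) typically use; the function name and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ppr_propagate(adj, features, alpha=0.1, n_iters=10):
    """Iterative personalized-PageRank feature propagation:
    H_{k+1} = (1 - alpha) * A_hat @ H_k + alpha * X,
    where A_hat is the symmetrically normalized adjacency with self-loops
    and alpha is the teleport (restart) probability."""
    n = adj.shape[0]
    a = adj + np.eye(n)                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(1))   # D^{-1/2}; degrees >= 1 after self-loops
    a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = features.copy()
    for _ in range(n_iters):
        h = (1 - alpha) * a_hat @ h + alpha * features
    return h
```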

Cited by 3 publications (10 citation statements) | References 33 publications
“…In this section, we introduce an advanced method for universal graph prompt tuning that is applied to the features of input nodes at the subgraph level, referred to as Subgraph-level Universal Prompt Tuning (SUPT). Inspired by the concept of pixel-level prompts for images [1,4,23,27,34,44,52] and node-level prompts for graphs [7], SUPT similarly injects additional prompt features into the input space of graphs.…”
Section: Methods
confidence: 99%
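As a rough illustration of what "prompt features in the input space" could look like, here is a hedged sketch (the names and the cluster-based assignment are our assumptions, not the SUPT authors' code): a node-level prompt in the GPF style adds one shared vector to every node's features, while a subgraph-level variant might add a per-cluster vector before the frozen GNN runs.

```python
import numpy as np

# Hypothetical shapes: 5 nodes, 4-dim features, 2 subgraphs/clusters.
x = np.random.randn(5, 4)                 # input node features
cluster_ids = np.array([0, 0, 1, 1, 1])   # node -> subgraph assignment
prompts = np.zeros((2, 4))                # learnable prompt features (init)

def apply_subgraph_prompts(x, cluster_ids, prompts):
    """Inject prompt features into the input space: each node gets the
    prompt vector of its subgraph added to its raw features."""
    return x + prompts[cluster_ids]

x_prompted = apply_subgraph_prompts(x, cluster_ids, prompts)
```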
“…A key aspect of our work is to theoretically demonstrate that SUPT maintains universality under these conditions. This implies that SUPT can also achieve the theoretical performance upper bound for any prompting function, as shown in the GPF paper [7]. In the context of a specified pre-training task t from a set of tasks T, and given an input graph G characterized by its node features X and adjacency matrix A, we introduce a prompting function Ψ_t(·).…”
Section: Subgraph-level Universal Prompt Tuning
confidence: 99%
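For concreteness, the simplest prompting function of this kind (our paraphrase of GPF [7], not a quotation from the paper) leaves the graph structure untouched and only shifts the node features:

Ψ_t(G) = (X + P, A),

where P holds the learnable prompt features for task t: a single shared vector broadcast to all nodes in basic GPF, and per-subgraph vectors in SUPT as sketched above.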