2021
DOI: 10.48550/arxiv.2103.05872
Preprint

Sampling methods for efficient training of graph convolutional networks: A survey

Cited by 3 publications (4 citation statements). References 64 publications.
“…This way, each vertex has the same number of edges in the subgraphs, which reduces the computation complexity in GNN training and improves the regularity of the message aggregation for the subgraphs. It has been shown that the sampling-based methods can achieve accuracy competitive with the training of the full graph [14], [15], [16].…”
Section: Sampling-based Training (mentioning, confidence: 99%)
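
As a concrete illustration of the fixed-edge-count idea described in this statement, the sketch below samples the same number of neighbors for every vertex in a mini-batch. The function name `sample_fixed_degree`, the `fanout` parameter, and the dict-of-lists graph representation are assumptions made for this example, not an interface from the surveyed papers.

```python
import random

def sample_fixed_degree(adj, batch, fanout, seed=None):
    """For each vertex in `batch`, draw exactly `fanout` neighbors from the
    full adjacency `adj` (a dict: vertex id -> list of neighbor ids).
    Every sampled vertex then has the same in-degree in the subgraph,
    which keeps the message aggregation regular in shape."""
    rng = random.Random(seed)
    sub_adj = {}
    for v in batch:
        neighbors = adj[v]  # assumes every batch vertex has >= 1 neighbor
        if len(neighbors) >= fanout:
            sub_adj[v] = rng.sample(neighbors, fanout)
        else:
            # pad by sampling with replacement to keep the degree fixed
            sub_adj[v] = [rng.choice(neighbors) for _ in range(fanout)]
    return sub_adj

# usage sketch
adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}
print(sample_fixed_degree(adj, batch=[0, 2], fanout=2, seed=0))
```
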
“…Sampling-based training. By using mini-batches, sampling-based training can scale to large graphs and has shown accuracy competitive with the full graph-based training [14], [15], [16]. GraphSage [11] first introduces the fixed-number vertex sampling and proposes the general message aggregation methods in GNN, such as sum, max pool, average, and LSTM.…”
Section: Related Work (mentioning, confidence: 99%)
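
The aggregation operators named above (sum, max pooling, average; the LSTM variant is omitted for brevity) can be sketched as a single dispatch function. `aggregate` is a hypothetical helper written for this page, not GraphSage's actual API.

```python
import numpy as np

def aggregate(neighbor_feats: np.ndarray, how: str = "mean") -> np.ndarray:
    """Combine sampled neighbor features of shape [num_sampled, dim]
    into one vector using the chosen aggregation operator."""
    if how == "sum":
        return neighbor_feats.sum(axis=0)
    if how == "mean":
        return neighbor_feats.mean(axis=0)
    if how == "max":  # element-wise max pooling over sampled neighbors
        return neighbor_feats.max(axis=0)
    raise ValueError(f"unknown aggregator: {how}")

# usage sketch: 3 sampled neighbors, 4-dimensional features
h_v = aggregate(np.random.rand(3, 4), how="mean")
```
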
“…After sampling, instead of using all neighbors as in the whole-graph training, sample-based training constructs a vertex's feature by only aggregating the features of the sampled set of neighbors. Existing sampling approaches largely fall into four categories: node-wise sampling, layer-wise sampling, subgraph-based sampling, and heterogeneous sampling [25]. They differ in the granularity of the sampling operation in one training minibatch.…”
Section: Sampling Algorithms (mentioning, confidence: 99%)
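
To make the granularity distinction concrete, the sketch below contrasts node-wise sampling (each frontier vertex draws its own neighbors) with layer-wise sampling (one shared vertex pool per layer). Subgraph-based and heterogeneous sampling are omitted, and the function names and dict-of-lists graph representation are assumptions for illustration only.

```python
import random

def node_wise_layer(adj, frontier, fanout, rng=random):
    """Node-wise: every vertex in the frontier independently samples up to
    `fanout` of its own neighbors."""
    return {v: rng.sample(adj[v], min(fanout, len(adj[v]))) for v in frontier}

def layer_wise_layer(adj, frontier, budget, rng=random):
    """Layer-wise: a single pool of at most `budget` candidate vertices is
    drawn for the whole layer; each frontier vertex keeps only its edges
    that land in that pool."""
    candidates = sorted({u for v in frontier for u in adj[v]})
    pool = set(rng.sample(candidates, min(budget, len(candidates))))
    return {v: [u for u in adj[v] if u in pool] for v in frontier}
```
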
“…Whole-graph training introduces inherent coordination and communication overheads that are hard to overcome as the system scales. Scaling sample-based training, on the other hand, requires (a) sampling algorithms that can form mini-batches without incurring into the "neighbor explosion" problem [14,25] and (b) scalable systems to execute these sampling algorithms efficiently. We review recent research that addresses these requirements.…”
Section: Introduction (mentioning, confidence: 99%)
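
The "neighbor explosion" referenced here is the multiplicative growth of the support set needed to compute one output embedding: with a fanout of k neighbors per vertex and L layers, up to k^L vertices can be pulled in per mini-batch vertex. A back-of-the-envelope sketch follows; the function name is illustrative.

```python
def receptive_field_upper_bound(fanout: int, num_layers: int, batch_size: int = 1) -> int:
    """Upper bound on how many supporting vertices a mini-batch may touch
    when every vertex samples `fanout` neighbors across `num_layers` layers."""
    return batch_size * fanout ** num_layers

# e.g. fanout 10 with 3 layers already needs up to 1,000 vertices per batch vertex
print(receptive_field_upper_bound(fanout=10, num_layers=3))
```
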