2022
DOI: 10.1007/978-3-030-99372-6_7
Optimizing Sparse Matrix Multiplications for Graph Neural Networks

Abstract: This is a repository copy of Optimizing Sparse Matrix Multiplications for Graph Neural Networks.

Cited by 10 publications (6 citation statements)
References 41 publications
“…New DNN workloads exhibit new characteristics. For example, inputs to the graph neural network are sparse matrices [40], and other work targets scheduling [39], [41], [42]. These approaches are complementary to AIACC-Training.…”
Section: Discussion
confidence: 99%
“…Matrix view: Expressing GNN models from a coarse-grained, global perspective, emphasizing operations involving sparse adjacency matrices and feature vectors. Both views are essential tools for studying GNNs and complement each other [15, 78, 80] (Figure 9B).…”
Section: Large-scale GLMs
confidence: 99%
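
To make the quoted "matrix view" concrete: a GNN layer in this view is a sparse-dense matrix product between the adjacency matrix and the node-feature matrix, followed by a dense weight multiplication. The sketch below is illustrative only; the function name, the ReLU activation, and the toy graph are assumptions, not code from the cited paper.

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj: sp.csr_matrix, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One GCN-style layer in the matrix view: H' = ReLU(A_hat @ H @ W).

    adj    -- sparse (normalized) adjacency matrix, shape (n, n)
    feats  -- dense node-feature matrix H, shape (n, d_in)
    weight -- dense weight matrix W, shape (d_in, d_out)
    """
    aggregated = adj @ feats                      # SpMM: aggregate neighbour features
    return np.maximum(aggregated @ weight, 0.0)   # dense matmul + ReLU

# Toy 3-node path graph (edges 0-1, 1-2) with self-loops.
A = sp.csr_matrix(np.array([[1, 1, 0],
                            [1, 1, 1],
                            [0, 1, 1]], dtype=np.float64))
H = np.random.rand(3, 4)  # 4 input features per node
W = np.random.rand(4, 2)  # project to 2 output features
print(gcn_layer(A, H, W).shape)  # (3, 2)
```

The sparse adjacency times dense features (SpMM) is exactly the kernel the paper's title refers to; the second, dense multiplication is standard and typically not the bottleneck.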
“…Sparse matrix-vector multiplication (SpMV) multiplies the adjacency matrix A (transposed) of graph G with a vector x of values, one per vertex; it is used in Graph/Recurrent Neural Networks [40], Topic Search [25], and Belief Propagation [22]. In this paper, we consider MIP algorithms over a single thread, and study how to compute the answers to multiple sources in one go, in an interleaved manner.…”
Section: Preliminaries
confidence: 99%
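
The SpMV operation described in the quote above has a simple reference form over the standard CSR layout. The sketch below is a minimal, assumed illustration (the graph, variable names, and driver code are hypothetical); it computes y = A @ x by iterating only over stored nonzeros.

```python
import numpy as np
import scipy.sparse as sp

def spmv(indptr, indices, data, x):
    """Reference SpMV over CSR arrays: y = A @ x.

    For each row i, accumulate data[k] * x[indices[k]] over the
    nonzeros k in [indptr[i], indptr[i+1]).
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Transposed adjacency matrix of a 3-vertex cycle: edges 0->1, 1->2, 2->0.
A_T = sp.csr_matrix(np.array([[0, 0, 1],
                              [1, 0, 0],
                              [0, 1, 0]], dtype=float))
x = np.array([1.0, 2.0, 3.0])  # one value per vertex
y = spmv(A_T.indptr, A_T.indices, A_T.data, x)
assert np.allclose(y, A_T @ x)  # agrees with the library SpMV
```

Using the transposed adjacency means row i of A_T gathers values from the in-neighbours of vertex i, which is the propagation direction these graph algorithms need.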