50th International Conference on Parallel Processing 2021
DOI: 10.1145/3472456.3472462

Exploiting in-Hub Temporal Locality in SpMV-based Graph Processing

Abstract: The skewed degree distribution of real-world graphs is the main source of poor locality in traversing all edges of the graph, known as Sparse Matrix-Vector (SpMV) Multiplication. Conventional graph traversal methods, such as push and pull, traverse all vertices in the same manner, and we show applying a uniform traversal direction for all edges leads to sub-optimal memory locality, hence poor efficiency. This paper argues that different vertices in power-law graphs have different locality characteristics and t…

Cited by 12 publications (7 citation statements)
References 43 publications (51 reference statements)
“…SpMV is a widely used kernel in graph analytics, e.g., Graph/Recurrent Neural Networks [17,40]. Generalized SpMV iteratively computes $x^{t+1} = A x^{t} = \bigoplus_{i=1}^{n} A_i \otimes x^{t}$, where $A$ is the adjacency matrix (transposed) of a graph $G$ and $A_i$ is the $i$-th row of $A$; $x^{t}$ is a vector of values, each for a vertex of $G$; $t$ is the number of iterations that have been completed; $\oplus$ and $\otimes$ are algorithm-specific semiring operators.…”
Section: Sparse Matrix-Vector Multiplication (SpMV)
confidence: 99%
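
To make the generalized SpMV iteration above concrete, here is a minimal sketch of one iteration over a graph stored in CSR form. The function name spmv_step, the CSR arrays, the (min, +) semiring choice, and the small example graph are illustrative assumptions, not the cited paper's implementation.

```python
# Minimal sketch of one generalized SpMV iteration over a CSR-stored graph.
# The semiring operators 'oplus' and 'otimes' are algorithm-specific; the
# (min, +) pair used in the example below performs a shortest-path relaxation
# step and is only an illustrative choice.

def spmv_step(row_ptr, col_idx, edge_val, x, oplus, otimes, identity):
    """Compute x_next[i] = oplus over incoming edges (j -> i) of otimes(A[i][j], x[j])."""
    n = len(row_ptr) - 1
    x_next = [identity] * n
    for i in range(n):                          # one row of A (transposed) per destination vertex
        acc = identity
        for e in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[e]                      # source vertex of the incoming edge
            acc = oplus(acc, otimes(edge_val[e], x[j]))
        x_next[i] = acc
    return x_next

# Example: one (min, +) relaxation on a hypothetical 3-vertex graph.
INF = float("inf")
row_ptr  = [0, 1, 2, 3]
col_idx  = [2, 0, 1]                            # vertex i has one incoming edge from col_idx[i]
edge_val = [1.0, 2.0, 3.0]
x = [0.0, INF, INF]                             # distances initialized at vertex 0
x = spmv_step(row_ptr, col_idx, edge_val, x, min, lambda a, b: a + b, INF)
print(x)                                        # [inf, 2.0, inf]
```

Swapping the semiring, e.g., (+, ×) for PageRank-style accumulation or (or, and) for reachability, reuses the same traversal loop, which is why SpMV serves as a common kernel across graph algorithms.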
“…iHTL [27] is a structure-aware SpMV with optimized locality for processing power-law graphs. iHTL extracts dense sub-graphs containing the incoming edges to in-hubs and processes them in the push direction, while processing the other edges in the pull direction.…”
Section: Depth of Components' Trees
confidence: 99%
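
The push/pull distinction that iHTL exploits can be illustrated with a short sketch. The functions below are hypothetical simplifications (the semiring is reduced to a single ⊕ operator over unweighted edges) and are not iHTL's actual code; they only contrast the memory-access patterns of the two directions.

```python
# Hypothetical sketch contrasting the two traversal directions (not iHTL's code).
# Pull: each destination vertex gathers from its in-neighbors -> random reads,
#       sequential local writes.
# Push: each source vertex scatters to its out-neighbors -> sequential local
#       reads, random writes.
# iHTL's observation is that edges pointing to high in-degree "in-hub" vertices
# can be pushed, because the scattered writes then target a small set of hub
# destinations that stays cache-resident.

def pull_iteration(in_edges, x, oplus, identity):
    """in_edges[v] lists the in-neighbors of vertex v."""
    x_next = [identity] * len(x)
    for v, sources in enumerate(in_edges):
        acc = identity
        for u in sources:
            acc = oplus(acc, x[u])              # random read of x[u]
        x_next[v] = acc                         # local, sequential write
    return x_next

def push_iteration(out_edges, x, oplus, identity):
    """out_edges[u] lists the out-neighbors of vertex u."""
    x_next = [identity] * len(x)
    for u, destinations in enumerate(out_edges):
        for v in destinations:
            x_next[v] = oplus(x_next[v], x[u])  # random write to x_next[v]
    return x_next
```

A uniform push-only or pull-only traversal pays the random-access cost on every edge; splitting edges by the in-degree of their destination is what lets iHTL keep the random side of each access pattern cache-resident.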
“…(2) Some graph optimizations depend on the architecture of machines, and it is the tension between data size and the architecture's capacities that forms the challenge context and presents the opportunity to design novel data structures, algorithms, and processing models. E.g., the design of locality-optimizing algorithms [9], [10], [11], [12] depends on the fact that the CPU's cache contains only a small portion of the data. With the advent of CPUs with cache sizes of multiple gigabytes, locality-optimizing algorithms play no role for small datasets, as accesses to a large portion of the data are covered by the cache.…”
Section: A. Why Do We Need Updated and Real-World Graphs?
confidence: 99%
“…(2) A wide range of real-world datasets facilitates cross-domain evaluation of the new contributions and provides a broad and correct assessment across a variety of use cases (i.e., better pruning of the falsifiable insights [24]). Also, we will have the opportunity to improve several graph algorithms and optimizations that exploit the structure of graphs [2], [10], [11], [25], [26].…”
Section: B. Why Do We Need Different Types of Real-World Graphs?
confidence: 99%