2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) 2020
DOI: 10.1109/ipdps47924.2020.00077
Spara: An Energy-Efficient ReRAM-Based Accelerator for Sparse Graph Analytics Applications

Cited by 26 publications (17 citation statements) · References 32 publications
“…This is particularly a challenge for real-world graphs, which often have skewed degree distributions. To address this problem, GraphSAR [102] is proposed based on the insight that sparsity can be adjusted through graph reordering. Still considering the matrix view of a graph, if a vertex update is computed using all non-zeros in a column, it is possible to "move" the non-zeros close to each other so that the subgraphs covered by the GEs become denser, thereby reducing wasted computation.…”
Section: GraphSAR
confidence: 99%
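The reordering idea above can be sketched numerically: relabelling vertices so that non-zeros cluster reduces the number of fixed-size subgraph blocks (stand-ins for the GEs) that contain any work at all. The block size, the simple degree-based ordering, and the toy graph below are illustrative assumptions, not GraphSAR's actual algorithm:

```python
import numpy as np

def block_density(adj, block=4):
    """Fraction of fixed-size blocks (modelling GE-sized subgraphs)
    that contain at least one non-zero edge."""
    n = adj.shape[0]
    nonempty = total = 0
    for i in range(0, n, block):
        for j in range(0, n, block):
            total += 1
            if adj[i:i + block, j:j + block].any():
                nonempty += 1
    return nonempty / total

def reorder_by_degree(adj):
    """Relabel vertices by descending degree so non-zeros cluster in
    one corner of the matrix (a crude stand-in for graph reordering)."""
    order = np.argsort(-adj.sum(axis=0))
    return adj[np.ix_(order, order)]
```

On a toy graph whose high-degree vertices are interleaved with isolated ones, the degree ordering packs all edges into a few blocks, so far fewer GE-sized subgraphs need to be processed.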
“…Hence, ReRAM-based accelerators for both DNN training and inference have been extensively studied [6], [7]. Moreover, ReRAM-based graph accelerators have been shown to significantly outperform CPU- or GPU-based systems in terms of both execution time and energy [8], [14], [15]. However, these solutions focus mainly on accelerating the computation.…”
Section: A. ReRAM-Based Architectures
confidence: 99%
“…It should be noted that even smaller ReRAM sizes can also be used for E-PEs [14]. In this work, we adopt (without loss of generality) the 8×8 tile architecture for the E-PE, following recent trends [8], [15].…”
Section: A. Role of Heterogeneity
confidence: 99%
“…Resistive memory in its memristive crossbar array (MCA) configuration can significantly accelerate the matrix multiplication operations commonly found in neural network, graph, and image processing workloads [19, 82–89]. Figure 5a shows a mathematical abstraction of one such operation for a single-layer perceptron implementing binary classification, where x are the inputs to the perceptron, w are its weights, sgn is the signum function, and y is the output [90]. In this manner, the MCA is able to perform the matrix multiplication operation in just one step.…”
Section: Memory Types
confidence: 99%
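The one-step dot product described above can be modelled in a few lines: each column current is the Kirchhoff sum of per-cell Ohm's-law currents I[j] = Σᵢ V[i]·G[i,j], so the whole matrix-vector product emerges from a single analog read. The differential pair of non-negative conductance columns used to encode signed weights is an assumed (though common) scheme, not taken from the cited papers:

```python
import numpy as np

def crossbar_mvm(G_pos, G_neg, v):
    """Model of one analog step in a memristive crossbar array: each
    output current is the Kirchhoff sum of per-cell Ohm's-law currents
    I[j] = sum_i V[i] * G[i, j]. Signed weights are encoded as the
    difference of two non-negative conductance columns."""
    return v @ G_pos - v @ G_neg

def perceptron(w, x):
    """Single-layer binary classifier y = sgn(w . x), with the dot
    product done 'in one step' by the crossbar model above."""
    G_pos = np.maximum(w, 0.0).reshape(-1, 1)   # positive weights
    G_neg = np.maximum(-w, 0.0).reshape(-1, 1)  # magnitudes of negatives
    current = crossbar_mvm(G_pos, G_neg, x)
    return np.sign(current)[0]
```

A DRAM-based system would instead stream w and x out of memory and accumulate the products over multiple load/compute steps, which is the contrast the following statement draws.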
“…In contrast, DRAM requires multiple load/store and computation steps to perform the same operation. This advantage is exploited by management policies that map matrix-vector multiplication onto ReRAM crossbar arrays [19, 82–89].…”
Section: Energy Efficiency
confidence: 99%