2021 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
DOI: 10.1109/iccad51958.2021.9643511

DARe: DropLayer-Aware Manycore ReRAM architecture for Training Graph Neural Networks

Cited by 9 publications (2 citation statements). References 20 publications.
“…After processing, the graph (A_sp) is in general highly sparse: task-irrelevant edges have been removed, which reduces the subsequent computation and the redundant memory-access cost in the GNN. Moreover, graph sparsification algorithms can be skillfully leveraged to reduce communication latency during training, thus accelerating GNN training in hardware [Arka et al., 2021]. Next, we discuss these methods according to their schemes (i.e., heuristic or learnable).…”
Section: Graph-level Improvements
Confidence: 99%
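The sparsification described in the statement above can be illustrated with a minimal sketch. The code below assumes a simple heuristic scheme that scores each edge by the cosine similarity of its endpoint features and keeps only high-scoring edges to form A_sp; the scoring rule, the threshold, and the sparsify helper are illustrative assumptions, not the specific algorithm used by DARe or by the citing survey.

import numpy as np

def sparsify(adj: np.ndarray, feats: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return A_sp: keep edge (i, j) only if cosine(feats[i], feats[j]) >= threshold."""
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T                          # pairwise cosine similarity
    keep = (sim >= threshold).astype(adj.dtype)  # edge mask from the heuristic score
    return adj * keep                            # zero out "task-irrelevant" edges

# Toy usage: a 4-node graph with 2-D node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
A_sp = sparsify(A, X)
print(f"edges before: {int(A.sum()) // 2}, after: {int(A_sp.sum()) // 2}")

Fewer nonzeros in A_sp translate directly into fewer message-passing operations and memory accesses per GNN layer, which is the computation and communication saving the statement refers to.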
“…DARe. Different from the other accelerators, which focus on the GCN inference phase, DARe [188] targets the GCN training phase. During GCN training, DropEdge and Dropout operations (referred to collectively as DropLayer) are applied to regularize the model and improve accuracy.…”
Section: PIM-based Graph Learning Accelerators
Confidence: 99%
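As a companion illustration, the sketch below shows the two DropLayer regularizers in their standard textbook form, applied around one GCN-style propagation step H' = (A_drop @ H) @ W. The drop rates, array shapes, and helper names are illustrative assumptions; DARe's actual in-memory (ReRAM) implementation of these operations is not reproduced here.

import numpy as np

rng = np.random.default_rng(42)

def drop_edge(adj: np.ndarray, p: float = 0.2) -> np.ndarray:
    """Randomly remove a fraction p of edges, keeping the adjacency symmetric."""
    keep = (rng.random(adj.shape) >= p).astype(adj.dtype)
    keep = np.triu(keep, 1)   # sample each undirected edge once
    keep = keep + keep.T      # mirror so A stays symmetric
    return adj * keep

def dropout(h: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Standard inverted dropout on node activations."""
    mask = (rng.random(h.shape) >= p) / (1.0 - p)
    return h * mask

# One regularized propagation step on a toy 3-node graph.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))   # node activations
W = rng.normal(size=(4, 4))   # layer weights
H_next = dropout(drop_edge(A) @ H @ W)
print(H_next.shape)           # (3, 4)

Both operations are resampled at every training step, which is why they matter only for the training phase that DARe accelerates, not for inference.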