2022
DOI: 10.48550/arxiv.2201.08475
Preprint

GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration

Cited by 2 publications (1 citation statement)
References 0 publications
“…[41] proposes a resource-efficient heterogeneous pipeline architecture for GNNs on HBM-enabled FPGAs. [42] proposes GenGNN, a generic GNN acceleration framework using High-Level Synthesis (HLS), aiming to deliver ultra-fast GNN inference and to support a diverse set of GNN models. Results show their designs achieve millisecond-level latency.…”
Section: Related Work
confidence: 99%
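For context, a generic GNN framework of the kind the citation describes typically maps a gather-apply (message-passing) layer onto hardware. The following is a minimal illustrative sketch only, written in HLS-style C++; the function name, array bounds, CSR layout, and pragma placement are assumptions for illustration and are not taken from the GenGNN paper.

// Illustrative sketch: a simplified message-passing GNN layer of the kind a
// generic accelerator maps to hardware. All names and sizes are hypothetical.
#include <cstddef>

constexpr int MAX_NODES = 1024;   // assumed upper bound on graph size
constexpr int FEAT_DIM  = 16;     // assumed feature width

// CSR-style adjacency: neighbors of node v are col_idx[row_ptr[v] .. row_ptr[v+1]).
void gnn_layer(const float x[MAX_NODES][FEAT_DIM],
               const int   row_ptr[MAX_NODES + 1],
               const int   col_idx[],
               const float w[FEAT_DIM][FEAT_DIM],
               float       out[MAX_NODES][FEAT_DIM],
               int         num_nodes) {
    for (int v = 0; v < num_nodes; ++v) {
        float agg[FEAT_DIM] = {0.0f};
        // Gather: sum the features of v's neighbors.
        for (int e = row_ptr[v]; e < row_ptr[v + 1]; ++e) {
#pragma HLS PIPELINE II=1
            int u = col_idx[e];
            for (int f = 0; f < FEAT_DIM; ++f) {
                agg[f] += x[u][f];
            }
        }
        // Apply: one dense transform followed by ReLU.
        for (int i = 0; i < FEAT_DIM; ++i) {
            float acc = 0.0f;
            for (int j = 0; j < FEAT_DIM; ++j) {
                acc += agg[j] * w[j][i];
            }
            out[v][i] = acc > 0.0f ? acc : 0.0f;
        }
    }
}

The sketch compiles as ordinary C++ (the HLS pragma is ignored by standard compilers) and is only meant to show the gather/apply structure that such frameworks specialize per GNN model.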