2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca56546.2023.10071015
FlowGNN: A Dataflow Architecture for Real-Time Workload-Agnostic Graph Neural Network Inference

Cited by 12 publications (3 citation statements) | References 38 publications
“…First, we manually translate the sPHENIX model into synthesizable C code and feed it into the HLS tool, Vitis HLS [6]. Then, we perform hardware optimizations of the model in HLS following the FlowGNN architecture [7], which is the state-of-the-art GNN architecture on FPGA.…”
Section: Generation of the GNN IP Core (citation type: mentioning)
confidence: 99%
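
The flow this statement quotes (a hand-translated C model fed to Vitis HLS, then optimized following FlowGNN) implies a top-level function of roughly the shape below. This is a minimal sketch under assumed names and sizes: gnn_top, feat_t, NUM_NODES, and the trivial per-feature transform are illustrative, not the sPHENIX source. The HLS DATAFLOW pragma is the kind of pipeline-style optimization the FlowGNN architecture relies on.

#include <ap_fixed.h>
#include <hls_stream.h>

typedef ap_fixed<16, 6> feat_t;  // assumed fixed-point type
#define NUM_NODES 1024           // assumed upper bound on graph size
#define FEAT_DIM 8               // embedding dimension of 8, per the quotes

// Hypothetical synthesizable top-level function for Vitis HLS.
void gnn_top(const feat_t node_in[NUM_NODES][FEAT_DIM],
             feat_t node_out[NUM_NODES][FEAT_DIM]) {
#pragma HLS DATAFLOW  // FlowGNN-style: stages run as a concurrent pipeline

    hls::stream<feat_t> s("feat_stream");
#pragma HLS STREAM variable=s depth=64

read_stage:  // Stage 1: stream node features in.
    for (int n = 0; n < NUM_NODES; n++)
        for (int d = 0; d < FEAT_DIM; d++) {
#pragma HLS PIPELINE II=1
            s.write(node_in[n][d]);
        }

compute_stage:  // Stage 2: placeholder transform standing in for the MLPs.
    for (int n = 0; n < NUM_NODES; n++)
        for (int d = 0; d < FEAT_DIM; d++) {
#pragma HLS PIPELINE II=1
            node_out[n][d] = s.read() * feat_t(0.5);
        }
}

In the real design each stage would hold model layers rather than a scalar multiply; the dataflow skeleton connecting producer and consumer stages through streams is the part the FlowGNN-style optimization contributes.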
“…The current TrackGNN model in sPHENIX we are using has one GNN layer, which includes 4 multi-layer perceptron (MLP) layers for both node and edge embedding with a dimension of 8. The proposed architecture follows the message-passing framework in FlowGNN [7]: the node embeddings are processed first, followed by an adapter to orchestrate the node information to the correct edge processing units for edge embedding computation and message aggregation. We also use quantization to reduce the data precision and to reduce the memory and computation requirements.…”
Section: Generation of the GNN IP Core (citation type: mentioning)
confidence: 99%
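
A hedged sketch of the processing order this statement describes: node embeddings are computed first, then an adapter gathers each edge's endpoint embeddings for the edge units, which compute edge messages that are aggregated per node. The function and array names, the edge-list format, and the collapse of the quoted 4 MLP layers into a single matrix-vector step are all assumptions; only the dimension of 8 and the use of low-precision quantization come from the quote.

#include <ap_fixed.h>

typedef ap_fixed<8, 3> q_t;  // assumed quantized low-precision type
#define N_NODES 256
#define N_EDGES 512
#define DIM 8                // embedding dimension of 8, per the quote

void message_pass(const q_t x[N_NODES][DIM],
                  const int src[N_EDGES], const int dst[N_EDGES],
                  const q_t Wn[DIM][DIM], const q_t We[DIM][DIM],
                  q_t agg[N_NODES][DIM]) {
    q_t h[N_NODES][DIM];
#pragma HLS ARRAY_PARTITION variable=h dim=2 complete

    // Output is accumulated, so clear it first.
    for (int n = 0; n < N_NODES; n++)
        for (int o = 0; o < DIM; o++) agg[n][o] = 0;

node_loop:  // 1) Node embedding (one MLP layer shown; the model uses 4).
    for (int n = 0; n < N_NODES; n++) {
#pragma HLS PIPELINE II=1
        for (int o = 0; o < DIM; o++) {
            q_t acc = 0;
            for (int i = 0; i < DIM; i++) acc += Wn[i][o] * x[n][i];
            h[n][o] = acc;
        }
    }

edge_loop:  // 2) Adapter + edge embedding + aggregation: route each edge's
            //    endpoint embeddings to an edge unit, then sum at dst.
    for (int e = 0; e < N_EDGES; e++) {
#pragma HLS PIPELINE II=1
        for (int o = 0; o < DIM; o++) {
            q_t msg = 0;
            for (int i = 0; i < DIM; i++)
                msg += We[i][o] * (h[src[e]][i] + h[dst[e]][i]);
            // A real design would buffer messages to avoid the
            // read-modify-write hazard on agg under II=1.
            agg[dst[e]][o] += msg;
        }
    }
}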
“…FlowGNN [120] is proposed to support generic GNN models for real-time inference applications. By introducing explicit message passing and multi-level parallelism, the authors provide a comprehensive solution for GNN acceleration without sacrificing adaptability.…”
Section: Framework for FPGA-based Accelerators (citation type: mentioning)
confidence: 99%
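
To make "multi-level parallelism" concrete, the sketch below unrolls in two dimensions: four node-processing elements work on different nodes each cycle (node level), and each element processes all eight features at once (feature level). The PE count, the function, and the trivial scaling kernel are assumptions for illustration; the levels of parallelism in FlowGNN itself are not limited to these two.

#include <ap_fixed.h>

typedef ap_fixed<16, 6> f_t;
#define N_NODES 256
#define DIM 8

// Hypothetical kernel: scale every node feature, exposing two levels of
// parallelism via unrolling and matching array partitioning.
void scale_features(f_t x[N_NODES][DIM], const f_t w[DIM]) {
#pragma HLS ARRAY_PARTITION variable=x dim=1 cyclic factor=4
#pragma HLS ARRAY_PARTITION variable=x dim=2 complete
#pragma HLS ARRAY_PARTITION variable=w dim=1 complete

    for (int n = 0; n < N_NODES; n += 4) {
#pragma HLS PIPELINE II=1
        for (int p = 0; p < 4; p++) {        // node level: 4 PEs in parallel
#pragma HLS UNROLL
            for (int d = 0; d < DIM; d++) {  // feature level: all 8 lanes
#pragma HLS UNROLL
                x[n + p][d] *= w[d];
            }
        }
    }
}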