2022
DOI: 10.48550/arxiv.2209.14065
Preprint

LL-GNN: Low Latency Graph Neural Networks on FPGAs for Particle Detectors

Abstract: This work proposes a novel reconfigurable architecture for low latency Graph Neural Network (GNN) design specifically for particle detectors. Accelerating GNNs for particle detectors is challenging since it requires sub-microsecond latency to deploy the networks for online event selection in the Level-1 triggers at the CERN Large Hadron Collider experiments. This paper proposes a custom code transformation with strength reduction for the matrix multiplication operations in the interaction-network based GNNs wi…
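The strength reduction the abstract refers to can be illustrated on the standard interaction-network formulation, where node features are routed to edges via one-hot receiving/sending matrices. Because those matrices are one-hot, the dense matrix multiplication can be replaced by pure indexing, eliminating all multiply-accumulate operations. The sketch below is illustrative only (it is not the paper's FPGA implementation); the shapes and names `O`, `R`, and `recv` are assumptions following common interaction-network notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat, n_edges = 4, 3, 6

# Node feature matrix and a receiver index per edge (illustrative sizes).
O = rng.random((n_feat, n_nodes))
recv = rng.integers(0, n_nodes, n_edges)

# One-hot "receiving" matrix: column j selects node recv[j].
R = np.zeros((n_nodes, n_edges))
R[recv, np.arange(n_edges)] = 1.0

# Dense formulation: O(n_feat * n_nodes * n_edges) multiplies.
dense = O @ R

# Strength-reduced formulation: a gather, no multiplies at all.
gathered = O[:, recv]

print(np.allclose(dense, gathered))  # True: the two are equivalent
```

On an FPGA this kind of rewrite matters because a gather becomes fixed routing/wiring rather than DSP-consuming multipliers, which is consistent with the sub-microsecond latency budget described above.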

Cited by 2 publications (3 citation statements) | References 40 publications (96 reference statements)
“…LL-GNN [32] aims to minimize latency in processing GNNs on FPGAs for real-time applications in high-energy physics, especially in collider triggering systems where ultra-low latency is crucial for timely event selection. Que et al. propose a design combining quantization and FPGAs that offers low latency when processing small graphs and can be used in scenarios requiring sub-microsecond latency and high throughput, such as particle identification in fundamental physics experiments.…”
Section: FPGA-based Accelerator Approaches with Quantization
confidence: 99%
“…Low power consumption increases the energy efficiency of embedded devices, which is advantageous for mobile applications. FPGAs offer flexibility, which opens up innovative possibilities for researchers and developers [32]. The research community aims to use FPGA-based accelerators to address issues such as load imbalance, memory requirements, and computing power [30].…”
Section: Introduction
confidence: 99%
“…Graph Neural Networks (GNNs) have become the state-of-the-art models for representation learning on graphs, facilitating many applications such as social recommendation systems [1], [2], molecular property prediction [3], [4], and traffic prediction [5]. Initially, GNNs were computed on a GPU [6], [7], [8] or an FPGA platform [9], [10], [11]; however, as graph sizes increase, computing GNNs on a single GPU or FPGA platform becomes time-consuming. Thus, many works [12], [13], [14], [15], [16] have proposed accelerating GNN training on multi-CPU or multi-GPU platforms, which provide more memory bandwidth and computation resources.…”
Section: Introduction
confidence: 99%