2021
DOI: 10.3389/fdata.2020.598927

Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics

Abstract: Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FPGA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than one μs on an FPGA. To do…
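
To illustrate what the abstract means by a "distance-weighted graph network" (a GarNet/GravNet-style architecture), here is a minimal NumPy sketch of the core aggregation step. The function name, the exp(-d²) weighting, and the toy shapes are illustrative assumptions, not the paper's exact FPGA implementation:

```python
import numpy as np

def distance_weighted_aggregation(x, d):
    """Toy GarNet-style aggregation (illustrative sketch, not the paper's exact layer).

    x : (V, F) array of features for V detector hits
    d : (V, S) learned distances from each hit to S aggregator nodes
    """
    w = np.exp(-d ** 2)                                 # (V, S) weights: nearby hits count more
    agg = (w.T @ x) / (w.sum(axis=0)[:, None] + 1e-9)   # (S, F) distance-weighted means
    out = w @ agg                                       # (V, F) broadcast back, weighted again
    return np.concatenate([x, out], axis=1)             # (V, 2F) updated hit features

# toy usage: 6 hits with 4 features each, 2 aggregators
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
d = np.abs(rng.normal(size=(6, 2)))
print(distance_weighted_aggregation(x, d).shape)  # (6, 8)
```

Because the weights are a smooth function of learned distances rather than a hard adjacency matrix, this style of layer avoids irregular memory access, which is part of what makes it amenable to fixed-latency FPGA implementation.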

Cited by 55 publications (39 citation statements)
References 33 publications
“…Many commonly used NN layers are supported: Dense; Convolution; BatchNormalization; and several Activation layers. In addition, domain specific layers can be easily added, one example being compressed distance-weighted graph networks [42].…”
Section: Motivation
confidence: 99%
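
The layer list in this statement refers to the hls4ml conversion workflow of the citing paper, which maps Keras layers to FPGA firmware. Below is a hedged sketch using only the layer types named above; the model architecture is invented for illustration, and the hls4ml calls follow that library's documented entry points, but the exact arguments should be treated as assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers
import hls4ml

# A toy model built only from layer types the quote lists as supported:
# Dense, BatchNormalization, and Activation.
model = tf.keras.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(32),
    layers.BatchNormalization(),
    layers.Activation("relu"),
    layers.Dense(5),
    layers.Activation("softmax"),
])

# Generate an HLS conversion config, convert, and compile a bit-accurate
# C simulation of the resulting firmware (hls4ml's documented workflow).
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(model, hls_config=config)
hls_model.compile()
```

Domain-specific layers, such as the compressed distance-weighted graph network of this paper, are added by registering a custom layer handler alongside these built-in ones.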
“…The algorithm is simple to port to computing architectures that support common ML frameworks like TensorFlow without significant investment. This includes GPUs and potentially even field-programmable gate arrays (FPGAs) or ML-specific processors such as the GraphCore intelligence processing units (IPUs) [67] through specialized ML compilers [68][69][70]. These coprocessing accelerators can be integrated into existing CPU-based experimental software frameworks as a scalable service that grows to meet the transient demand [71][72][73].…”
Section: Results
confidence: 99%
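
The "scalable service" integration described in this statement is the inference-as-a-service pattern of the cited references: the CPU-based experimental framework sends lightweight requests to a remote coprocessor. A hypothetical client-side sketch using the NVIDIA Triton gRPC client, one common realization of this pattern; the server address, model name, and tensor names are all invented for illustration:

```python
import numpy as np
import tritonclient.grpc as grpcclient  # pip install tritonclient[grpc]

# Hypothetical setup: a Triton server at localhost:8001 serving a GNN model
# named "particle_gnn" that takes one batch of 128 hits x 4 features.
client = grpcclient.InferenceServerClient(url="localhost:8001")

hits = np.random.rand(1, 128, 4).astype(np.float32)
inp = grpcclient.InferInput("hits", list(hits.shape), "FP32")
inp.set_data_from_numpy(hits)
out = grpcclient.InferRequestedOutput("scores")

# The CPU-side framework only issues this lightweight request; the heavy
# inference runs on whatever coprocessor (GPU, FPGA, IPU) backs the server.
result = client.infer(model_name="particle_gnn", inputs=[inp], outputs=[out])
print(result.as_numpy("scores").shape)
```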
“…Work has also been done to accelerate the inference of deep neural networks with heterogeneous resources beyond GPUs, like field-programmable gate arrays (FPGAs) [49][50][51][52][53][54][55][56][57]. This work extends to GNN architectures [29,58]. Specifically, in Ref.…”
Section: Inference Timing
confidence: 99%