2020
DOI: 10.1088/1748-0221/15/05/p05026

Fast inference of Boosted Decision Trees in FPGAs for particle physics

Abstract: We describe the implementation of Boosted Decision Trees in the hls4ml library, which allows the translation of a trained model into FPGA firmware through an automated conversion process. Thanks to its fully on-chip implementation, hls4ml performs inference of Boosted Decision Tree models with extremely low latency. With a typical latency less than 100 ns, this solution is suitable for FPGA-based real-time processing, such as in the Level-1 Trigger system of a collider experiment. These developments open up pr…
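The abstract's key point is that a decision tree can be evaluated with a fixed number of comparisons, which is what makes constant, sub-100 ns latency possible on an FPGA. A minimal sketch of the idea, in plain Python: the tree is flattened into parallel arrays indexed by node id, so each level of the tree is one fixed comparison and a boosted ensemble is just a sum over independent trees. The array names and toy values are illustrative, not the hls4ml API.

```python
# A depth-2 toy tree stored as parallel arrays, indexed by node id:
# node 0 is the root; the children of internal node i are 2*i+1 and 2*i+2.
FEATURE = [0, 1, 1]                # input feature tested at each internal node
THRESHOLD = [0.5, 0.2, 0.8]        # comparison threshold at each internal node
LEAF = [0.1, 0.4, -0.3, 0.7]       # scores at the 4 leaves, left to right

def tree_score(x):
    """Evaluate the toy tree with a fixed number of comparisons (depth = 2),
    mirroring the fixed-latency structure of an on-chip implementation."""
    idx = 0
    for _ in range(2):             # fixed depth -> fixed number of steps
        go_right = x[FEATURE[idx]] > THRESHOLD[idx]
        idx = 2 * idx + 2 if go_right else 2 * idx + 1
    return LEAF[idx - 3]           # leaves occupy node ids 3..6

def bdt_score(x, trees):
    """A boosted ensemble is a sum of independent tree scores, so in
    hardware all trees can be evaluated in parallel and summed."""
    return sum(t(x) for t in trees)
```

In firmware the loop is unrolled and all comparisons at a given depth happen concurrently, which is why the latency depends on tree depth rather than on the number of trees.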

Cited by 58 publications (41 citation statements). References 16 publications.
“…Development of ML models deployable to FPGA-based L1T systems is helped by tools for automatic network-to-circuit conversion such as hls4ml. Using hls4ml, several solutions for HEP-specific tasks (e.g., jet tagging) have been provided (Duarte et al, 2018;Coelho et al, 2020;Di Guglielmo et al, 2020;Summers et al, 2020), exploiting models with simpler architectures than what is shown here. This tool has been applied extensively for tasks in the HL-LHC upgrade of the CMS L1T system, including an autoencoder for anomaly detection, and DNNs for muon energy regression and identification, tau lepton identification, and vector boson fusion event classification (CMS Collaboration, 2020).…”
Section: Related Work
Mentioning confidence: 99%
“…To this end, its design prioritizes all-on-chip implementations of the most common network components. Its functionality has been demonstrated with dense neural networks (DNNs) (Duarte et al., 2018) and extended to also support BDTs (Summers et al., 2020). Extensions to convolutional and recurrent neural networks are in development.…”
Section: Introduction
Mentioning confidence: 99%
“…Work has also been done to accelerate the inference of deep neural networks with heterogeneous resources beyond GPUs, like field-programmable gate arrays (FPGAs) [49][50][51][52][53][54][55][56][57]. This work extends to GNN architectures [29,58].…”
Section: Inference Timing
Mentioning confidence: 99%
“…We now discuss some triggers constructed using the above BDT classification and quote the signal efficiency and background rates for each of them. In the Phase-II upgrade, the L1 trigger hardware will have FPGAs which will be able to handle small-scale machine learning (ML) applications, like BDT classification [70,71]. We choose three points from the ROC curves, corresponding to 98%, 90% and 70% background rejections.…”
Section: Triggers Based on the BDT Training Using Variables From Trac…
Mentioning confidence: 99%
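The last excerpt picks trigger working points at fixed background rejection (98%, 90%, 70%). A hedged sketch of how such a point is read off a set of classifier scores: sort the background scores and take the threshold at the requested quantile, then measure the signal efficiency at that cut. The helper names and toy scores below are illustrative, not taken from the cited analysis.

```python
def threshold_for_rejection(bkg_scores, target_rejection):
    """Smallest threshold that rejects at least `target_rejection` of the
    background sample (an event passes when its score is strictly above
    the threshold)."""
    s = sorted(bkg_scores)
    n_reject = int(round(target_rejection * len(s)))
    if n_reject == 0:
        return min(s) - 1.0        # reject nothing: cut below every score
    return s[n_reject - 1]

def signal_efficiency(sig_scores, threshold):
    """Fraction of signal events passing the cut."""
    return sum(v > threshold for v in sig_scores) / len(sig_scores)
```

Sweeping `target_rejection` over a grid traces out the ROC curve; the three quoted working points correspond to three such thresholds.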