2021
DOI: 10.1016/j.physletb.2020.136029
Deep learning ensemble for real-time gravitational wave detection of spinning binary black hole mergers

Cited by 45 publications (25 citation statements). References 17 publications.
“…It is worth comparing this figure to other recent studies in the literature. For instance, in [65], it was reported that an ensemble of 2 AI models reported 1 misclassification for every 2.7 days of searched data, and more basic AI architectures reported one misclassification for every 200 seconds of searched advanced LIGO data [28]. For completeness, it is worth mentioning that the results we present in Figure 6 differ from those we computed with traditional TensorFlow models in less than 0.01% [66].…”
Section: Discussion
Confidence: 74%
“…In this article, we build upon our recent work developing AI frameworks for production scale gravitational wave detection [65,66], and introduce an approach that consists of optimizing AI models for accelerated inference, leveraging NVIDIA TensorRT [67]. We describe how we deployed our TensorRT AI ensemble in the ThetaGPU supercomputer at Argonne Leadership Computing Facility, and developed the required software to optimally distribute inference using up to 20 nodes, which are equivalent to 160 NVIDIA A100 Tensor Core GPUs.…”
Section: Introduction
Confidence: 99%
“…It is worth comparing this figure to other recent studies in the literature. For instance, in Wei et al ( 2021b ), it was reported that an ensemble of 2 AI models reported 1 misclassification for every 2.7 days of searched data, and more basic AI architectures reported one misclassification for every 200 s of searched advanced LIGO data (George and Huerta, 2018a , b ). For completeness, it is worth mentioning that the results we present in Figure 6 differ from those we computed with traditional models in less than 0.01% (Huerta et al, 2021 ).…”
Section: Results
Confidence: 99%
“…When using a time-shifted advanced LIGO dataset that spans 5 years' worth of data, we found that our AI ensemble reports 1 misclassification per month of searched data. This should be contrasted with the first generation of AI models that reported 1 misclassification for every 200 s of searched data (George and Huerta, 2018a , b ), and the other AI ensembles that reported 1 misclassification for every 2.7 days of searched data (Wei et al., 2021b ).…”
Section: Discussion
Confidence: 96%
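The excerpts above quote false-alarm intervals in mismatched units (one per 200 s, one per 2.7 days, one per month). A minimal sketch putting them on a common scale, assuming "one per month" means 30 days of searched data:

```python
# Convert the quoted misclassification intervals to seconds of searched
# data per false alarm. The "1 per month" figure is assumed to mean
# 30 days; all other values come directly from the quoted text.
SECONDS_PER_DAY = 86_400

intervals = {
    "first-generation AI models (1 per 200 s)": 200,
    "2-model AI ensemble (1 per 2.7 days)": 2.7 * SECONDS_PER_DAY,
    "AI ensemble (1 per ~30-day month)": 30 * SECONDS_PER_DAY,
}

baseline = intervals["first-generation AI models (1 per 200 s)"]
for name, seconds in intervals.items():
    print(f"{name}: one false alarm per {seconds:,.0f} s of data "
          f"(~{seconds / baseline:,.0f}x the 200 s baseline)")
```

Under these assumptions, the 2.7-day interval is roughly 1,166 times longer than the 200 s baseline, and the one-month interval roughly 12,960 times longer.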