2022
DOI: 10.21468/scipostphys.13.4.087

Ephemeral Learning - Augmenting Triggers with Online-Trained Normalizing Flows

Abstract: The large data rates at the LHC require an online trigger system to select relevant collisions. Rather than compressing individual events, we propose to compress an entire data set at once. We use a normalizing flow as a deep generative model to learn the probability density of the data online. The events are then represented by the generative neural network and can be inspected offline for anomalies or used for other analysis purposes. We demonstrate our new approach for a toy model and a correlation-enhance…
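The abstract's central idea, training a normalizing flow online so that the network weights become the compressed representation of the data set, can be sketched in a few lines. Below is a minimal PyTorch sketch, assuming a small RealNVP-style flow trained by maximum likelihood on streaming batches that are seen once and then discarded; the architecture, toy event stream, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: online ("ephemeral") density learning with a normalizing flow.
# Each batch from the stream is used for one gradient step and then discarded,
# so only the network weights persist as the compressed data set.
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """RealNVP-style affine coupling: transforms one half of x given the other."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))   # outputs (s, t)

    def forward(self, x):
        x0, x1 = x.chunk(2, dim=1)
        s, t = self.net(x0).chunk(2, dim=1)
        s = torch.tanh(s)                 # bounded scale keeps training stable
        y1 = x1 * torch.exp(s) + t
        # swap halves so the next layer transforms the other coordinates
        return torch.cat([y1, x0], dim=1), s.sum(dim=1)

class Flow(nn.Module):
    """Stack of couplings; log_prob follows from the change-of-variables formula."""
    def __init__(self, dim=2, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(Coupling(dim) for _ in range(n_layers))

    def log_prob(self, x):
        log_det = x.new_zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        base = torch.distributions.Normal(0.0, 1.0)
        return base.log_prob(x).sum(dim=1) + log_det

def event_stream(batch_size=512):
    """Stand-in for the trigger-level event stream: a toy 2D Gaussian mixture."""
    while True:
        comp = torch.randint(0, 2, (batch_size, 1)).float()
        yield torch.randn(batch_size, 2) * 0.5 + (2.0 * comp - 1.0)

flow, stream = Flow(), event_stream()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):                  # each batch is seen once, then dropped
    loss = -flow.log_prob(next(stream)).mean()   # online maximum likelihood
    opt.zero_grad(); loss.backward(); opt.step()
```

In the paper's setting the trained flow can then be inverted to regenerate events for offline anomaly searches; the sketch omits the inverse pass for brevity.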

Cited by 9 publications (6 citation statements)
References 59 publications
“…Finally, there are many simulation-related questions in fundamental physics, where AI methods allow us to make significant progress. Examples going beyond immediate applications to event generation include symbolic regression [131], sample and data compression [58,132], detection of symmetries [133][134][135][136], and many other fascinating new ideas and concepts.…”
Section: Discussion (mentioning)
confidence: 99%
“…It implies that a potentially expensive integrand f(x) has to be evaluated for every event used to train the network, which makes it inefficient. One way to alleviate this problem is to buffer already generated samples and use them for a limited number of training passes [18].…”
Section: Neural Importance Sampling (mentioning)
confidence: 99%
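The buffering idea quoted above can be made concrete: evaluate the expensive integrand f(x) once per buffer fill, then reuse the stored samples for a limited number of optimizer passes, with importance weights correcting for the fact that the density has moved since sampling. The following is a hedged Python sketch assuming a density model with sample() and log_prob() methods; the function name, the reweighted-KL loss, and all hyperparameters are illustrative choices, not the implementation of Ref. [18].

```python
# Sketch of buffered training for neural importance sampling: the expensive
# integrand f(x) is evaluated once per buffer fill, and the stored samples
# are reused for a limited number of cheap training passes.
import torch

def buffered_training(flow, integrand, optimizer,
                      n_fills=50, buffer_size=4096, reuse_passes=4):
    for _ in range(n_fills):
        with torch.no_grad():
            x = flow.sample(buffer_size)     # assumed sampler API
            q0 = flow.log_prob(x).exp()      # density at sampling time
            f_abs = integrand(x).abs()       # expensive f(x), paid once per fill
        for _ in range(reuse_passes):        # cheap passes over the buffer
            log_q = flow.log_prob(x)
            # reweighted KL estimator: the samples were drawn from q0, not the
            # current q, so each term carries the importance weight f(x) / q0(x)
            loss = -(f_abs / q0 * log_q).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Here reuse_passes encodes the "limited number of training passes" from the quote: more reuse amortizes the integrand cost, but lets the buffer drift further from the current density.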
“…(1) using the trained weights defined in Eq. (18) we first define normalized channel-wise probability distributions as…”
Section: A. Buffered Losses and Training (mentioning)
confidence: 99%
“…Generative networks open new possibilities to enhance the efficiency of simulations [1,6]. They are able to learn underlying distributions with high precision [7,8,9,10,11,12,13,14] and can therefore provide more efficient phase space mappings [15,16,17,18,19,20], amplify [21,22] and compress data [23], serve as surrogate models in phenomenological studies, and provide fast detector simulations [24,25,26]. Finally, generative networks enable the inversion of the simulation chain [27,28,29].…”
Section: Introduction (mentioning)
confidence: 99%