2022
DOI: 10.21468/scipostphys.12.1.045

Deep Set Auto Encoders for Anomaly Detection in Particle Physics

Abstract: There is an increased interest in model agnostic search strategies for physics beyond the standard model at the Large Hadron Collider. We introduce a Deep Set Variational Autoencoder and present results on the Dark Machines Anomaly Score Challenge. We find that the method attains the best anomaly detection ability when there is no decoding step for the network, and the anomaly score is based solely on the representation within the encoded latent space. This method was one of the top-performing models in the Da…
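
To make the latent-space-only scoring concrete, the following is a minimal sketch, not the authors' implementation: the layer widths, the latent dimension, and all variable names are illustrative assumptions. A permutation-invariant Deep Set encoder embeds each particle with a shared network, sums the embeddings so the result does not depend on particle ordering, and maps the pooled summary to a Gaussian latent distribution. The anomaly score is the KL divergence of that distribution from the unit-Gaussian prior, with no decoder or reconstruction term.

import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    """Permutation-invariant encoder: per-particle network, sum pooling, Gaussian latent."""
    def __init__(self, n_features=4, hidden=64, latent_dim=8):
        super().__init__()
        # phi is applied to every particle independently
        self.phi = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # rho acts on the pooled (order-independent) event summary
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, particles, mask):
        # particles: (batch, max_particles, n_features); mask is 1 for real particles, 0 for padding
        h = self.phi(particles) * mask.unsqueeze(-1)
        pooled = h.sum(dim=1)              # sum pooling gives permutation invariance
        g = self.rho(pooled)
        return self.mu(g), self.logvar(g)

def latent_anomaly_score(mu, logvar):
    # KL divergence of N(mu, sigma^2) from the unit-Gaussian prior, per event;
    # used directly as the anomaly score, with no reconstruction term.
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1)

# Usage sketch: events whose encoded distribution sits far from the prior get a large score.
encoder = DeepSetEncoder()
particles = torch.randn(2, 10, 4)           # 2 events, up to 10 particles, 4 features each
mask = torch.ones(2, 10)
mu, logvar = encoder(particles, mask)
scores = latent_anomaly_score(mu, logvar)   # shape (2,)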

Cited by 28 publications (13 citation statements). References 43 publications.
“…Finally, it would be interesting to see if one can extend this technique to cases where there is no known high-level variable basis (like the Energy Flow Polynomials) and to see to what extent decision ordering transfers to different signals. For instance, the methods which performed best on the Dark Machines anomaly score challenge [25,51,54] used variational autoencoder structures which only aimed to make a Gaussian latent space and did not try to reconstruct events. It would be very interesting to see what physics these methods are using, but there is no obvious basis of observables to use.…”
Section: Discussion
confidence: 99%
“…Recently, the use of machine learning techniques has been advocated as a means to reduce the model dependence (Weisser and Williams, 2016; Collins et al., 2018, 2019, 2021; Blance et al., 2019; Cerri et al., 2019; D'Agnolo and Wulzer, 2019; De Simone and Jacques, 2019; Heimel et al., 2019; Andreassen et al., 2020; Cheng et al., 2020; Dillon et al., 2020; Farina et al., 2020; Hajer et al., 2020; Khosa and Sanz, 2020; Nachman, 2020; Nachman and Shih, 2020; Park et al., 2020; Amram and Suarez, 2021; Bortolato et al., 2021; D'Agnolo et al., 2021; Finke et al., 2021; Gonski et al., 2021; Hallin et al., 2021; Ostdiek, 2021). In this context, the particle-physics community engaged in two data challenges: the LHC Olympics 2020 (Kasieczka et al., 2021) and the DarkMachines challenge (Aarrestad et al., 2021), where different approaches were explored to attempt to detect an unknown signal of new physics hidden in simulated data.…”
Section: Introduction
confidence: 99%
“…This study is an update of our contribution to the DarkMachines challenge (Aarrestad et al., 2021), which benefits from the lessons learned from that challenge. Taking inspiration from solutions presented by other groups in the challenge (e.g., Caron et al., 2021; Ostdiek, 2021), we evaluate the impact of some of their findings on our specific setup. In some cases (but not always), these solutions translate into improved performance, quantified using the same metrics presented in Aarrestad et al. (2021).…”
Section: Introduction
confidence: 99%
“…Recently, there have been many proposals for automating AD methods with machine learning [62-81] (see Refs. [80-83] for overviews of the field).…”
Section: Introduction
confidence: 99%