2021
DOI: 10.21468/scipostphyscore.4.2.013

Testing new physics models with global comparisons to collider measurements: the Contur toolkit

Abstract: Measurements at particle collider experiments, even if primarily aimed at understanding Standard Model processes, can have a high degree of model independence, and implicitly contain information about potential contributions from physics beyond the Standard Model. The CONTUR package allows users to benefit from the hundreds of measurements preserved in the RIVET library to test new models against the bank of LHC measurements to date. This method has proven to be very effective in several recent publications fr…

Cited by 26 publications (25 citation statements)
References 138 publications
“…The current CONTUR methodology [1,3] obtains exclusion results for all the desired points in a new-physics model's parameter space by generating collider events with the Herwig event generator [5], using a Universal FeynRules Object [6] to describe the BSM model. Generated signal events are passed through the analysis logic of the preserved RIVET routines [7], and the resulting distributions are compared to the observed results of LHC analyses.…”
Section: Motivation
confidence: 99%
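The signal-injection comparison described above can be sketched in a few lines. This is an illustrative simplification, not Contur's actual statistical machinery (which builds a full likelihood-based exclusion): it assumes the measured data equal the SM prediction, injects the generated BSM signal on top, and quotes the single most sensitive bin of a measurement "pool" as a Gaussian significance. The function names are hypothetical.

```python
import math

def bin_significance(signal, unc):
    # Gaussian z-score for an injected signal count s against the
    # measurement uncertainty u, under the working assumption that
    # the observed data equal the SM prediction.
    return signal / unc

def pool_exclusion(signal, unc):
    """Return (z, CL) for a measurement pool, using only its single
    most sensitive bin -- a simplification that sidesteps unknown
    bin-to-bin correlations."""
    z = max(bin_significance(s, u) for s, u in zip(signal, unc))
    # One-sided Gaussian confidence level at which the injected
    # signal would be disfavoured.
    cl = 1.0 - 0.5 * math.erfc(z / math.sqrt(2.0))
    return z, cl
```

For example, an injected signal of [1, 4, 2] events in bins with uncertainties [1, 2, 1] gives z = 2 from the best bin, i.e. roughly 97.7% CL in this toy counting.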
“…There are several metrics which can be used to quantify this, as discussed in Section 2.4. Other performance metrics are also calculated for the testing and training pools, and determine the stopping conditions which dictate whether the ORACLE should continue sampling or provide a final prediction¹. A summary of this procedure is provided in Figure 2.…”
Section: Training Algorithm
confidence: 99%
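The sample-evaluate-stop loop described in this citation can be sketched as follows. This is a toy active-learning loop, not the cited ORACLE: the 1-nearest-neighbour surrogate, the threshold "oracle", and all names are illustrative stand-ins for an expensive evaluation (e.g. a full exclusion calculation) and its trained emulator.

```python
import random

def true_label(x):
    # Stand-in "oracle" for an expensive evaluation at parameter
    # point x: here just a threshold on a 1-D parameter.
    return x > 0.5

def predict(x, labelled):
    # 1-nearest-neighbour surrogate trained on the labelled pool.
    nearest = min(labelled, key=lambda p: abs(p[0] - x))
    return nearest[1]

def active_sample(test_pool, target_acc=0.95, budget=40, seed=1):
    """Iteratively sample points and query the oracle; after each
    query, evaluate accuracy on a held-out testing pool and stop
    once it reaches target_acc or the sampling budget is spent."""
    rng = random.Random(seed)
    labelled = []
    for step in range(budget):
        x = rng.random()                      # sample a new point
        labelled.append((x, true_label(x)))   # query the oracle
        acc = sum(predict(t, labelled) == true_label(t)
                  for t in test_pool) / len(test_pool)
        if acc >= target_acc:                 # stopping condition met
            return step + 1, acc
    return budget, acc
```

The stopping condition here is a single accuracy threshold; the procedure described above monitors several such metrics on both the testing and training pools before deciding to stop.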
“…A non-exhaustive list of these includes MadAnalysis 5 [7][8][9][10][11][12], which can be used with a detector simulation or transfer functions and contains around 40 run I and II analyses; CheckMATE [13,14], which contains over 50 run I and II analyses, and was recently extended to support long-lived particle searches in addition to prompt particle searches; ColliderBit [15], which includes its own detector modelling in the form of fast 4-vector smearing (Buckfast), and has a database of around 40 analyses from runs I and II; and Rivet [17,18], which currently has "only" 30 BSM analyses, but boasts a huge library of SM analyses, upwards of 800. In fact, it has been shown that SM inclusive measurements also set strong constraints on new physics, and the tool Contur has been developed for this purpose [19][20][21][22][23]. A comparison of the performance of these tools for a specific CMS search for supersymmetric (SUSY) particles [24] was performed [25], and strong agreement was seen across all included tools.…”
Section: Reinterpretation of LHC Searches
confidence: 99%