2020
DOI: 10.1140/epjc/s10052-020-8230-1

The DNNLikelihood: enhancing likelihood distribution with Deep Learning

Abstract: We introduce the DNNLikelihood, a novel framework to easily encode, through deep neural networks (DNN), the full experimental information contained in complicated likelihood functions (LFs). We show how to efficiently parametrise the LF, treated as a multivariate function of parameters of interest and nuisance parameters with high dimensionality, as an interpolating function in the form of a DNN predictor. We do not use any Gaussian approximation or dimensionality reduction, such as marginalisation or profiling…
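The approach described, fitting a DNN regressor to log-likelihood values over the combined space of parameters of interest and nuisance parameters, can be sketched in a few lines. This is a minimal illustration under assumed names and sizes, not the authors' implementation: the dimensionality, architecture, and toy logL target below are all placeholders.

```python
import numpy as np
import tensorflow as tf

# Stand-in for samples of the true likelihood: points in the full
# parameter space (POIs + nuisance parameters) with their logL values.
n_dim = 95  # illustrative dimensionality, not the paper's benchmark
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100_000, n_dim)).astype("float32")
y = -0.5 * np.sum(X**2, axis=1)  # hypothetical logL(theta) targets

# DNN interpolator: a plain fully connected regressor logL ~ f(theta),
# with no Gaussian approximation, marginalisation, or profiling.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_dim,)),
    tf.keras.layers.Dense(512, activation="selu"),
    tf.keras.layers.Dense(512, activation="selu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, batch_size=1024, epochs=10, validation_split=0.2)

# The trained predictor then stands in for the expensive LF in
# downstream Bayesian sampling or frequentist profiling.
logL_pred = model.predict(X[:10], verbose=0)
```

In a real use case the training points would come from sampling the experimental likelihood itself, and the accuracy of the interpolation would be validated on held-out samples.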


Cited by 23 publications (22 citation statements) | References 44 publications
“…The models are made by learning numbers based on experimental measurements (e.g. likelihoods, posteriors, confidence levels for exclusion) from training data given the parameters of the physical model and experimental nuisance parameters [31]. Such training data can come from experiments or the recasting tools discussed below.…”
Section: Comparison Of Reinterpretation Methods
Mentioning confidence: 99%
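To make the quoted setup concrete, the sketch below assembles such a training set: points sampled jointly in the physical-model and nuisance-parameter space, paired with the experimental quantity to be learned. The toy_logL function is hypothetical; in practice the targets would come from the experiment or from the recasting tools mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_logL(mu, nuis):
    # Hypothetical stand-in for an experimental likelihood evaluation.
    return -0.5 * ((mu - 1.0) ** 2 + np.sum(nuis**2, axis=-1))

n_points, n_nuis = 50_000, 10
mu = rng.uniform(0.0, 2.0, size=n_points)      # physical-model parameter
nuis = rng.normal(size=(n_points, n_nuis))     # experimental nuisance parameters
X = np.column_stack([mu, nuis])                # DNN inputs
y = toy_logL(mu, nuis)                         # regression targets
```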
“…Such an approach has been proposed in Ref. [31], using a parameterisation that can encode complicated likelihoods with minimal loss of accuracy, in a lightweight, standard, and framework-independent format (e.g. ONNX) suitable for a wide range of reinterpretation applications.…”
Section: Full Likelihoods
Mentioning confidence: 99%
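As a sketch of what a lightweight, framework-independent export could look like, the snippet below converts a toy (untrained) Keras model to ONNX with the tf2onnx package and evaluates it through onnxruntime. The converter choice, file name, and model are assumptions for illustration, not tooling prescribed by the paper.

```python
import numpy as np
import tensorflow as tf
import tf2onnx
import onnxruntime as ort

n_dim = 95  # must match the trained interpolator's input dimension
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_dim,)),
    tf.keras.layers.Dense(512, activation="selu"),
    tf.keras.layers.Dense(1),
])

# Convert to a single .onnx file usable from any ONNX-capable framework.
tf2onnx.convert.from_keras(
    model,
    input_signature=[tf.TensorSpec((None, n_dim), tf.float32, name="theta")],
    output_path="dnnlikelihood.onnx",
)

# Reinterpretation side: evaluate the stored likelihood without TensorFlow.
sess = ort.InferenceSession("dnnlikelihood.onnx")
theta = np.random.rand(10, n_dim).astype("float32")
logL = sess.run(None, {"theta": theta})[0]
```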
“…[21] and regress on the event information I = −log10(weight) instead. Similar approaches have been studied in detail in the literature [36].…”
Section: Fitting the MEM with DNN
Mentioning confidence: 98%
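The target transformation in the quote, regressing on I = −log10(weight) rather than on the raw weight, compresses the large dynamic range typical of matrix-element weights into a well-conditioned regression target. A minimal sketch with hypothetical toy weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy matrix-element weights spanning many orders of magnitude.
weights = rng.lognormal(mean=-30.0, sigma=8.0, size=10_000)

# Train the network on I = -log10(weight) ...
I = -np.log10(weights)

# ... and invert the transform after prediction to recover the weight.
weights_back = 10.0 ** (-I)
assert np.allclose(weights, weights_back)
```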
“…[45, 67-69, 72, 73, 111-113] for decorrelation techniques, Ref. [24, 28-31, 33, 41-43, 66, 100, 114-119] for inference, Ref. [3, 4, 40, 60, 109, …] for tagging, and various other applications in Ref.…”
Section: Illustrative Model
Mentioning confidence: 99%