2021
DOI: 10.48550/arxiv.2112.08217
Preprint
Probabilistic Forecasting with Generative Networks via Scoring Rule Minimization

Abstract: Probabilistic forecasting consists of stating a probability distribution for a future outcome based on past observations. In meteorology, ensembles of physics-based numerical models are run to obtain such a distribution. Usually, performance is evaluated with scoring rules, functions of the forecast distribution and the observed outcome. With some scoring rules, calibration and sharpness of the forecast can be assessed at the same time. In deep learning, generative neural networks parametrize distributions on high-d…
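The abstract mentions scoring rules that assess calibration and sharpness of a forecast simultaneously. As a minimal, hedged sketch (not taken from the paper), the energy score is one such strictly proper scoring rule; it can be estimated from an ensemble of forecast samples as E‖X − y‖ − ½ E‖X − X′‖, where X, X′ are independent draws from the forecast and y is the observed outcome. The function name and toy data below are illustrative assumptions:

```python
import numpy as np

def energy_score(samples, obs, beta=1.0):
    """Empirical energy score for one observation (lower is better).

    samples: (m, d) array of forecast/ensemble draws
    obs: (d,) observed outcome
    The energy score is strictly proper for beta in (0, 2).
    """
    samples = np.asarray(samples, dtype=float)
    obs = np.asarray(obs, dtype=float)
    # E||X - y||^beta, estimated over the ensemble
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1) ** beta)
    # E||X - X'||^beta, estimated over all ensemble pairs
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2) ** beta)
    return term1 - 0.5 * term2

rng = np.random.default_rng(0)
y = np.zeros(3)
good = rng.normal(0.0, 1.0, size=(100, 3))  # ensemble centered on the truth
bad = rng.normal(5.0, 1.0, size=(100, 3))   # badly biased ensemble
print(energy_score(good, y) < energy_score(bad, y))  # → True
```

A well-calibrated, sharp ensemble scores lower than a biased one, which is what makes such scores usable both for evaluation and, as the paper proposes, as a training objective.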

Cited by 2 publications (9 citation statements)
References 23 publications
“…In simulation studies, and especially on high-dimensional tasks, we found the Scoring Rule approach generally performed better and was substantially cheaper to train. These findings corroborate similar ones reported in Pacchiardi et al [2022] in the setting of probabilistic forecasting, making Scoring Rules minimization an appealing method to train generative networks, particularly when uncertainty quantification in the approximate distribution is critical.…”
Section: Discussion (supporting)
confidence: 87%
“…We discuss here how to use Scoring Rules to define an adversarial-free training objective for generative networks, focusing on the specific case of a generative network parametrizing an approximate posterior. In Pacchiardi et al [2022], more details on SR-training and its application to probabilistic forecasting can be found. Other works employing SR training, albeit not for the LFI framework, are Bouchacourt et al [2016], Gritsenko et al [2020], Harakeh and Waslander [2021].…”
Section: Posterior Inference via Scoring Rules Minimization (mentioning)
confidence: 99%
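The citation above describes using a scoring rule as an adversarial-free training objective for a generative network. A minimal sketch of that idea, under stated assumptions (the "generator" here is just a learnable location shift of Gaussian noise, trained by finite-difference descent on the empirical energy score; this is an illustration, not the paper's architecture or optimizer):

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_score(samples, obs):
    # empirical energy score: E||X - y|| - 0.5 * E||X - X'||
    t1 = np.mean(np.linalg.norm(samples - obs, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    t2 = np.mean(np.linalg.norm(diffs, axis=2))
    return t1 - 0.5 * t2

# toy data: outcomes drawn around an unknown mean the generator must learn
true_mean = np.array([2.0, -1.0])
obs = true_mean + rng.normal(size=(200, 2))

theta = np.zeros(2)                 # generator parameter (location shift)
lr, eps, m = 0.3, 1e-4, 64
avg, n_avg = np.zeros(2), 0
for step in range(300):
    y = obs[rng.integers(len(obs))]            # one observed outcome
    z = rng.normal(size=(m, 2))                # shared latent draws
    base = energy_score(theta + z, y)
    # finite-difference gradient of the score w.r.t. theta (common random numbers)
    grad = np.array([
        (energy_score(theta + np.eye(2)[k] * eps + z, y) - base) / eps
        for k in range(2)
    ])
    theta -= lr * grad
    if step >= 150:                            # average late iterates
        avg += theta
        n_avg += 1
theta_avg = avg / n_avg
print(theta_avg)
```

Because the energy score is strictly proper, minimizing it in expectation pushes the generator's distribution toward the data distribution, with no discriminator or adversarial game involved; here `theta_avg` ends up near `true_mean`.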