2019
DOI: 10.5194/hess-2019-181
Preprint

A crash-testing framework for predictive uncertainty assessment when forecasting high flows in an extrapolation context

Abstract: An increasing number of flood forecasting services assess and communicate the uncertainty associated with their forecasts. Obtaining reliable forecasts is a key issue, but it is a challenging task, especially when forecasting high flows in an extrapolation context, i.e., when the event magnitude is larger than what was observed before. In this study, we present a crash-testing framework that evaluates the quality of hydrological forecasts in an extrapolation context. The experimental setup is based on i) a larg…
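
To make the extrapolation setting concrete: an event is forecast "in extrapolation" when its magnitude exceeds anything in the record the model was calibrated on. The Python sketch below is a minimal illustration of that definition on synthetic data, not the authors' actual experimental protocol; the split date and the synthetic flow series are assumptions.

```python
import numpy as np
import pandas as pd

def flag_extrapolation(calib_obs: pd.Series, eval_obs: pd.Series) -> pd.Series:
    """Boolean mask over the evaluation period marking time steps whose
    observed flow exceeds the calibration-period maximum, i.e. cases where
    the forecasting system operates in an extrapolation context."""
    return eval_obs > calib_obs.max()

# Toy example with synthetic daily flows; the split date is arbitrary.
rng = np.random.default_rng(0)
dates = pd.date_range("2000-01-01", periods=3650, freq="D")
flow = pd.Series(rng.gamma(2.0, 10.0, size=dates.size), index=dates)
calib, evalu = flow[:"2006-12-31"], flow["2007-01-01":]
mask = flag_extrapolation(calib, evalu)
print(f"{mask.sum()} of {mask.size} evaluation days exceed the calibration maximum")
```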

Cited by 4 publications (4 citation statements; 2 published in 2020 and 2 in 2024) · References 69 publications

Citation statements (ordered by relevance)

“…The MuTHRE‐FD model overcomes these problems. A common paradigm in forecasting is that reliability takes precedence over sharpness, because a prediction that is sharp but unreliable represents overconfidence (e.g., Berthet et al., 2020; Crochemore et al., 2016; Gneiting & Katzfuss, 2014). It follows that the large improvements in reliability of high flows obtained by the MuTHRE‐FD model are worth the sacrifice in sharpness.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
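
The reliability-before-sharpness argument in this statement can be made concrete with two standard diagnostics: the empirical coverage of a central predictive interval (reliability) and its average width (sharpness). The sketch below is a minimal illustration on synthetic data; it is not taken from any of the cited papers.

```python
import numpy as np

def coverage_and_width(obs, lower, upper):
    """Empirical coverage of a central predictive interval (reliability proxy)
    and its mean width (sharpness proxy). A nominal 90% interval is reliable
    if roughly 90% of observations fall inside it; a narrower interval is
    sharper, but sharpness only has value once coverage is right."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    inside = (obs >= lower) & (obs <= upper)
    return inside.mean(), (upper - lower).mean()

rng = np.random.default_rng(42)
obs = rng.normal(0.0, 1.0, size=10_000)

# A calibrated 90% interval vs an overconfident (too sharp) one.
z90 = 1.645
calibrated = coverage_and_width(obs, -z90, z90)
overconfident = coverage_and_width(obs, -0.5 * z90, 0.5 * z90)
print(f"calibrated:    coverage={calibrated[0]:.2f}, width={calibrated[1]:.2f}")
print(f"overconfident: coverage={overconfident[0]:.2f}, width={overconfident[1]:.2f}")
```

Here the overconfident interval is twice as sharp but covers only about 59% of observations instead of the nominal 90%, which is exactly the overconfidence the quoted passage warns against.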

“…We struggled with finding suitable benchmarks for the DL uncertainty estimation approaches explored here. Ad hoc benchmarking and model intercomparison studies are common (e.g., Andréassian et al., 2009; Best et al., 2015; Kratzert et al., 2019b; Lane et al., 2019; Berthet et al., 2020; Nearing et al., 2018), and while the community has a (quickly growing) large-sample dataset for benchmarking hydrological models (Newman et al., 2017; Kratzert et al., 2019b), we lack standardized, open procedures for conducting comparative uncertainty estimation studies. Note that from the references above only Berthet et al. (2020) focused on benchmarking uncertainty estimation strategies, and then only for assessing postprocessing approaches. We previously argued that data-based models provide a meaningful and general benchmark for testing hypotheses and models (Nearing and Gupta, 2015; Nearing et al., 2020b), and here we develop a set of data-based uncertainty estimation benchmarks built on a standard, publicly available, large-sample dataset that could be used as a baseline for future benchmarking studies.…”
(citation type: mentioning; confidence: 99%)
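
As an illustration of what a data-based uncertainty benchmark can look like, the sketch below scores a climatological ensemble (the flows observed on the same day of year in earlier years) with the CRPS. This is a generic baseline in the spirit of the quoted passage, not the benchmark actually developed in the citing paper; the synthetic data, the hold-out year, and the day-of-year pooling are assumptions, and a real study would use a large-sample dataset such as CAMELS.

```python
import numpy as np
import pandas as pd

def crps_ensemble(obs: float, ens: np.ndarray) -> float:
    """Sample-based CRPS for one observation and one ensemble:
    E|X - y| - 0.5 * E|X - X'| over the empirical ensemble distribution."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

# Toy data: ten years of synthetic daily flow.
rng = np.random.default_rng(1)
dates = pd.date_range("2000-01-01", periods=3652, freq="D")
flow = pd.Series(rng.gamma(2.0, 10.0, size=dates.size), index=dates)

# Climatological benchmark: for each held-out day, the "forecast" is the set
# of flows observed on the same day of year in all earlier years.
doy = flow.index.dayofyear
scores = []
for day, obs in flow["2009":].items():  # hold out the last year
    ens = flow[(doy == day.dayofyear) & (flow.index.year < 2009)].to_numpy()
    scores.append(crps_ensemble(obs, ens))
print(f"mean climatology CRPS: {np.mean(scores):.2f}")
```

Any proposed uncertainty estimation method would then need to beat this score on the same data to demonstrate skill, which is what makes such a data-based baseline useful for standardized comparisons.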