2011
DOI: 10.1029/2010wr009153

Benchmarking quantitative precipitation estimation by conceptual rainfall‐runoff modeling

Abstract: Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure. Alternatively, one could drive a hydrological model with different QPE products and choose the one which best reproduces observed runoff. Unfortunately, the calibration of conceptual…
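The benchmarking idea described in the abstract — drive a rainfall-runoff model with each QPE product and rank the products by how well the simulated runoff matches observations — can be sketched as below. This is a minimal illustration, not the authors' actual setup: the linear-reservoir model, the product names, and all numbers are invented placeholders, and skill is measured with the Nash–Sutcliffe efficiency.

```python
import numpy as np

def toy_runoff_model(precip, k=0.1):
    """Minimal linear-reservoir stand-in for a conceptual rainfall-runoff model."""
    storage, runoff = 0.0, []
    for p in precip:
        storage += p
        q = k * storage       # outflow proportional to storage
        storage -= q
        runoff.append(q)
    return np.array(runoff)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, < 0 is worse than the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical "true" areal rainfall and two noisy QPE products (mm per step).
rng = np.random.default_rng(42)
true_rain = rng.gamma(2.0, 2.0, size=200)
qpe_products = {
    "gauge_interpolation": np.clip(true_rain + rng.normal(0.0, 0.5, 200), 0, None),
    "radar_adjusted": np.clip(true_rain + rng.normal(0.0, 1.0, 200), 0, None),
}
obs_runoff = toy_runoff_model(true_rain)  # stands in for gauged discharge

# Drive the model with each product and rank products by runoff-based skill.
scores = {name: nse(toy_runoff_model(p), obs_runoff)
          for name, p in qpe_products.items()}
best = max(scores, key=scores.get)
```

A real application would replace the toy model with a calibrated conceptual model and gauged discharge, which is precisely where the calibration caveat raised in the abstract comes in.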

Cited by 52 publications (57 citation statements)
References 41 publications
“…These results are in line with findings of Bárdossy and Das (2008) regarding network density or of Heistermann and Kneis (2011) with respect to different rainfall data sets and spatial interpolation methods; the hydrological model works better for general conditions, i.e., reproducing the hydrograph on the whole, when it is calibrated on extreme conditions, i.e., using the extreme value distribution of peak flows, than vice versa. This confirms that unusual events or small data sets might be sufficient for model calibration (Singh and Bárdossy, 2012; Seibert and Beven, 2009)…”
Section: Discussion (supporting)
confidence: 77%
“…For instance, Bárdossy and Das (2008) show that using different rain gauge networks for calibration and validation of a conceptual hydrologic model leads to significantly poorer performance compared to the case when unique networks are employed. Similar problems will occur if precipitation data from different sources are used for calibration and validation, such as rainfall information from point observations and weather radar (Heistermann and Kneis, 2011). In addition, if a hydrological model is calibrated using observed precipitation and runoff time series of high temporal resolution, e.g., hourly data, which are often available only for very short time periods, the outcome might not be optimal for the simulation of floods with large return periods of 50, 100 or more years.…”
Section: U. Haberlandt and I. Radtke: Hydrological Model Calibration (mentioning)
confidence: 99%
“…A discussion of the advantages and limitations of this approach is provided by Heistermann and Kneis (2011). We apply this strategy as one option to identify strengths and weaknesses of the radar-based rainfall estimates.…”
Section: QPE Verification Using Rain Gauges (mentioning)
confidence: 99%
“…However, different rainfall products have different error structures which imply different trade-offs. Heistermann and Kneis (2011) presented an MC approach to benchmark different precipitation products by using hydrological modelling. In our case, however, we are not so much interested in the "absolute" quality of the rainfall product.…”
Section: QPE Verification Using a Hydrological Model (mentioning)
confidence: 99%
“…Since systematic errors in rainfall input may partly be compensated by the choice of the model's parameters (see, e.g. Heistermann and Kneis, 2011), the evaluation was done with and without (re-)calibration of the hydrological model to the individual rainfall data sets.…”
Section: Evaluation Procedures (mentioning)
confidence: 99%
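The compensation effect noted in the last citation statement — systematic rainfall errors being partly absorbed by the model's parameters — can be illustrated with a toy experiment: under a fixed parameter set, a biased rainfall input scores poorly, but re-calibrating a single parameter recovers most of the skill. The model structure, the 30% bias, and the runoff coefficient `c` are all hypothetical choices for illustration.

```python
import numpy as np

def toy_model(precip, c, k=0.2):
    """Toy conceptual model: runoff coefficient c feeding a linear reservoir (rate k)."""
    storage, runoff = 0.0, []
    for p in precip:
        storage += c * p      # effective rainfall entering the reservoir
        q = k * storage
        storage -= q
        runoff.append(q)
    return np.array(runoff)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed runoff."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
true_rain = rng.gamma(2.0, 2.0, size=300)
obs = toy_model(true_rain, c=0.9)          # stands in for observed runoff
biased_rain = 1.3 * true_rain              # QPE product with +30% systematic error

# Without re-calibration: keep the parameter fitted to the unbiased input.
nse_fixed = nse(toy_model(biased_rain, c=0.9), obs)

# With re-calibration: a smaller runoff coefficient absorbs the input bias.
grid = np.linspace(0.3, 1.2, 181)
nse_recal = max(nse(toy_model(biased_rain, c), obs) for c in grid)
```

Here re-calibration masks the input error almost completely, which is exactly why the evaluation in the cited study is run both with and without re-calibrating the model to each rainfall data set.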