
Benchmarking quantitative precipitation estimation by conceptual rainfall-runoff modeling

Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure. Alternatively, one could drive a hydrological model with different QPE products and choose the one which best reproduces observed runoff. Unfortunately, the calibration of conceptual model parameters might conceal actual differences between the QPEs. To avoid such effects, we abandon the idea of determining optimum parameter sets for each QPE being compared. Instead, we carry out a large number of runoff simulations, confronting each QPE with a common set of random parameters. By evaluating the goodness-of-fit of all simulations, we obtain information on whether the quality of competing QPE methods is significantly different. This knowledge is inferred exactly at the scale of interest: the catchment scale. We use synthetic data to investigate the ability of this procedure to distinguish a truly superior QPE from an inferior one. We find that the procedure is prone to failure in the case of linear systems. However, we show evidence that in realistic (nonlinear) settings, the method can provide useful results even in the presence of moderate errors in model structure and streamflow observations. In a real-world case study on a small mountainous catchment, we demonstrate the ability of the verification procedure to reveal additional insights as compared to a conventional cross-validation approach.
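The benchmarking idea described in the abstract can be illustrated with a short, self-contained sketch: each QPE product drives the same conceptual model under one common ensemble of random parameter vectors, and the resulting goodness-of-fit distributions are compared. The toy bucket model, its parameter ranges, the synthetic QPE products, and the use of the Nash-Sutcliffe efficiency as the score are illustrative assumptions only, not the model, data, or scoring used in the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    def bucket_model(precip, k, s_max, beta, s0=0.0):
        # Hypothetical nonlinear single-bucket model (illustrative only):
        # storage fills with rainfall and releases runoff nonlinearly.
        s = s0
        q = np.empty_like(precip)
        for t, p in enumerate(precip):
            s = min(s + p, s_max)               # fill storage, excess is discarded
            q[t] = k * (s / s_max) ** beta * s  # storage-dependent release
            s -= q[t]
        return q

    def nse(sim, obs):
        # Nash-Sutcliffe efficiency as the goodness-of-fit score.
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Synthetic stand-ins for two competing QPE products and for observed runoff.
    n_steps = 500
    true_precip = rng.gamma(0.3, 8.0, n_steps)
    qpe_good = true_precip * rng.normal(1.0, 0.1, n_steps).clip(0)  # small errors
    qpe_poor = true_precip * rng.normal(1.0, 0.5, n_steps).clip(0)  # large errors
    obs_runoff = bucket_model(true_precip, k=0.3, s_max=80.0, beta=1.5)

    # One common ensemble of random parameter vectors, shared by all QPE products,
    # instead of calibrating an optimum parameter set per QPE.
    n_sets = 1000
    params = np.column_stack([
        rng.uniform(0.05, 0.9, n_sets),    # k
        rng.uniform(20.0, 200.0, n_sets),  # s_max
        rng.uniform(0.5, 3.0, n_sets),     # beta
    ])

    scores = {}
    for name, qpe in [("good QPE", qpe_good), ("poor QPE", qpe_poor)]:
        scores[name] = np.array([
            nse(bucket_model(qpe, k, s_max, beta), obs_runoff)
            for k, s_max, beta in params
        ])

    # Compare the goodness-of-fit distributions over the common parameter sample.
    for name, s in scores.items():
        print(f"{name}: median NSE = {np.median(s):.3f}, best NSE = {s.max():.3f}")
    frac = np.mean(scores["good QPE"] > scores["poor QPE"])
    print(f"'good QPE' beats 'poor QPE' on {frac:.1%} of the common parameter sets")

In this sketch, the superior QPE should outperform the inferior one for most of the shared random parameter sets, which is the kind of evidence the procedure uses to rank competing QPE methods at the catchment scale.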

Metadata
Author details: Maik Heistermann, David Kneis
DOI: https://doi.org/10.1029/2010WR009153
ISSN: 0043-1397
Title of parent work (English): Water Resources Research
Publisher: American Geophysical Union
Place of publishing: Washington
Publication type: Article
Language: English
Year of first publication: 2011
Publication year: 2011
Release date: 2017/03/26
Volume: 47
Issue: 23
Number of pages: 23
Funding institution: German Federal Ministry of Education and Research [SKZ 0330713D]
Organizational units: Mathematisch-Naturwissenschaftliche Fakultät / Institut für Geowissenschaften
Peer review: Refereed
Institution name at the time of publication: Mathematisch-Naturwissenschaftliche Fakultät / Institut für Erd- und Umweltwissenschaften