In-depth understanding of the potential implications of climate change is required to guide decision- and policy-makers when developing adaptation strategies and designing infrastructure suitable for future conditions. Impact models that translate potential future climate conditions into variables of interest are needed to create the causal connection between a changing climate and its impact on different sectors. Recent surveys suggest that the primary strategy for validating such models (and hence for justifying their use) relies heavily on assessing the accuracy of model simulations by comparing them against historical observations. We argue that such a comparison is necessary and valuable, but not sufficient to achieve a comprehensive evaluation of climate change impact models. We believe that a complementary, largely observation-independent step of model evaluation is needed to ensure more transparency of model behavior and greater robustness of scenario-based analyses. This step should address the following four questions: (1) Do modeled dominant process controls match our system perception? (2) Is the model's sensitivity to changing forcing as expected? (3) Do modeled decision levers show adequate influence? (4) Can we attribute uncertainty sources throughout the projection horizon? We believe that global sensitivity analysis, with its ability to investigate a model's response to joint variations of multiple inputs in a structured way, offers a coherent approach to addressing all four questions comprehensively. Such additional model evaluation would strengthen stakeholder confidence in model projections and, therefore, in the adaptation strategies derived with the help of impact models. This article is categorized under: Climate Models and Modeling > Knowledge Generation with Models; Assessing Impacts of Climate Change > Evaluating Future Impacts of Climate Change
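The variance-based global sensitivity analysis the abstract advocates can be illustrated with a minimal sketch. The abstract does not prescribe a particular estimator, so the brute-force double-loop estimate of first-order Sobol indices below, together with the hypothetical two-input toy model standing in for an impact model, is an illustrative assumption:

```python
import random
import statistics


def first_order_indices(model, n_inputs, n_outer=200, n_inner=200, seed=1):
    """Brute-force estimate of first-order Sobol indices
    S_i = Var(E[Y | X_i]) / Var(Y) for independent inputs
    assumed uniform on [0, 1]."""
    rng = random.Random(seed)

    def sample():
        return [rng.random() for _ in range(n_inputs)]

    # Unconditional output variance from a plain Monte Carlo sample.
    total = [model(sample()) for _ in range(n_outer * 5)]
    var_y = statistics.pvariance(total)

    indices = []
    for i in range(n_inputs):
        cond_means = []
        for _ in range(n_outer):
            xi = rng.random()  # fix input i at this value
            ys = []
            for _ in range(n_inner):
                x = sample()
                x[i] = xi
                ys.append(model(x))
            cond_means.append(statistics.fmean(ys))
        # Variance of the conditional means, normalized by Var(Y).
        indices.append(statistics.pvariance(cond_means) / var_y)
    return indices


# Hypothetical toy "impact model": Y = 4*X1 + X2, so input 1
# should dominate the output variance (analytically S1 = 16/17).
toy_model = lambda x: 4 * x[0] + x[1]
s1, s2 = first_order_indices(toy_model, n_inputs=2)
```

The double-loop estimator is transparent but expensive; in practice, Saltelli-style sampling schemes achieve the same indices with far fewer model evaluations.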
The growing worldwide impact of flood events has motivated the development and application of global flood hazard models (GFHMs). These models have become useful tools for flood risk assessment and management, especially in regions where little local hazard information is available. One of the key uncertainties associated with GFHMs is the estimation of extreme flood magnitudes used to generate flood hazard maps. In this study, the 1-in-100 year flood (Q100) magnitude was estimated using flow outputs from four global hydrological models (GHMs) and two global flood frequency analysis datasets for 1350 gauges across the conterminous US. The annual maximum flows of the observed and modelled streamflow time series were bootstrapped to evaluate the sensitivity of the underlying data to extrapolation. Results show that there are clear spatial patterns of bias associated with each method. GHMs show a general tendency to overpredict at Western US gauges and underpredict at Eastern US gauges. The GloFAS and HYPE models underpredict Q100 by more than 25% at 68% and 52% of gauges, respectively. The PCR-GLOBWB and CaMa-Flood models overestimate Q100 by more than 25% at 60% and 65% of gauges in the Western and Central US, respectively. The global frequency analysis datasets exhibit spatial variability that differs from the GHMs. We found that river basin area and topographic elevation explain some of the spatial variability in predictive performance found in this study. However, no single model or method performs best everywhere, and we therefore recommend that a weighted ensemble of predictions of extreme flood magnitudes be used for large-scale flood hazard assessment.
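The bootstrap of annual maximum flows described above can be sketched as follows. The abstract does not state which extreme-value distribution was fitted, so the Gumbel fit via the method of moments, the function names, and the 90% interval are illustrative assumptions, not the study's exact procedure:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329


def gumbel_q100(annual_maxima):
    """Estimate the 1-in-100-year flood (Q100) by fitting a Gumbel
    distribution to annual maximum flows via the method of moments."""
    mean = statistics.fmean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    beta = std * math.sqrt(6) / math.pi   # scale parameter
    mu = mean - EULER_GAMMA * beta        # location parameter
    # Q100 is the 0.99 quantile of the fitted Gumbel distribution.
    return mu - beta * math.log(-math.log(0.99))


def bootstrap_q100(annual_maxima, n_boot=1000, seed=42):
    """Resample the annual maxima with replacement and refit, to
    quantify how sensitive the Q100 extrapolation is to the sample."""
    rng = random.Random(seed)
    n = len(annual_maxima)
    estimates = sorted(
        gumbel_q100(rng.choices(annual_maxima, k=n))
        for _ in range(n_boot)
    )
    # 90% bootstrap interval from the empirical percentiles.
    return estimates[int(0.05 * n_boot)], estimates[int(0.95 * n_boot)]
```

Applied per gauge to both observed and modelled series, the width of the bootstrap interval indicates how much of the bias in Q100 could be attributable to sampling variability rather than model error.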