Design flood estimation is an essential part of flood risk assessment. Commonly applied are flood frequency analyses and design storm approaches, while derived flood frequency analysis using continuous simulation has recently been gaining attention. In this study, a continuous hydrological modelling approach on an hourly time scale is applied, driven by a multi-site weather generator combined with a k-nearest-neighbour resampling procedure based on the method of fragments. The derived 100-year flood estimates for 16 catchments in Vorarlberg (Austria) are compared to (a) flood frequency analysis based on observed discharges and (b) a design storm approach. Besides the peak flows, the corresponding runoff volumes are analysed. The spatial dependence structure of the synthetically generated flood peaks is validated against observations. The results demonstrate that the continuous modelling approach achieves plausible results and reveals a large variability in runoff volume across flood events.
Different upper tail indicators exist to characterize heavy tail phenomena, but no comparative study has been carried out so far. We evaluate the shape parameter (GEV), obesity index, Gini index and upper tail ratio (UTR) against a novel benchmark of tail heaviness - the surprise factor. Sensitivity analyses to sample size and changes in scale-to-location ratio are carried out in bootstrap experiments. The UTR replicates the surprise factor best but is most uncertain and only comparable between records of similar length. For samples with symmetric Lorenz curves, shape parameter, obesity and Gini indices provide consistent indications. For asymmetric Lorenz curves, however, the first two tend to overestimate, whereas Gini index tends to underestimate tail heaviness. We suggest the use of a combination of shape parameter, obesity and Gini index to characterize tail heaviness. These indicators should be supported with calculation of the Lorenz asymmetry coefficients and interpreted with caution.
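To make the compared indicators concrete, a minimal sketch of three of them is given below. The Gini index formula is standard; the obesity index is estimated here by a Monte Carlo over ordered 4-samples; the UTR definition used (sample maximum over an upper empirical quantile) is an assumption for illustration and may differ from the study's exact definition.

```python
import numpy as np

rng = np.random.default_rng(42)

def gini_index(x):
    """Gini index of a sample via the order-statistics formula
    G = sum_i (2i - n - 1) * x_(i) / (n^2 * mean)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * n * x.mean())

def obesity_index(x, n_draws=20_000):
    """Obesity index: P(X(1) + X(4) > X(2) + X(3)) for an ordered random
    4-sample; estimated here by Monte Carlo resampling."""
    x = np.asarray(x, dtype=float)
    draws = rng.choice(x, size=(n_draws, 4), replace=True)
    draws.sort(axis=1)
    return np.mean(draws[:, 0] + draws[:, 3] > draws[:, 1] + draws[:, 2])

def upper_tail_ratio(x, q=0.9):
    """Illustrative UTR: sample maximum over an upper empirical quantile
    (assumed definition, for demonstration only)."""
    x = np.asarray(x, dtype=float)
    return x.max() / np.quantile(x, q)

# A heavier-tailed sample (Pareto) should score higher than a light-tailed
# one (exponential) on all three indicators.
light = rng.exponential(scale=1.0, size=1000)
heavy = rng.pareto(a=1.1, size=1000) + 1.0
g_light, g_heavy = gini_index(light), gini_index(heavy)
ob_light, ob_heavy = obesity_index(light), obesity_index(heavy)
```

As the abstract notes, such indicators should be read together (e.g. alongside Lorenz asymmetry coefficients) rather than in isolation, since each responds differently to sample size and scale-to-location ratio.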
The July 2021 flood disaster in western Germany calls for a critical discussion of flood hazard estimation, the updating of flood hazard maps, and the communication of extreme flood scenarios. In this study, extreme value statistics for the annual maximum peak discharges at the Altenahr gauge in the Ahr valley were computed and compared with and without consideration of historical floods. The return period estimate for the 2021 flood, obtained with a Generalized Extreme Value (GEV) distribution that accounts for historical floods, ranges from about 2,600 to more than 58,700 years (90% confidence interval) with a median of about 8,600 years, whereas an estimate based only on the systematically gauged 74-year discharge record would theoretically yield a return period of more than 100 million years. Including the historical floods thus dramatically changes the flood quantiles on which hazard mapping is based. Nevertheless, the fit of the GEV to the series including historical floods indicates that the GEV model may not adequately represent the population of floods in the Ahr valley. The sample may be a mixed one, in which the extreme floods are generated by distinct processes compared with smaller events. The probabilities of extreme floods could therefore be considerably larger than the GEV model suggests; the application of a process-based mixture distribution should be investigated in future work. A comparison of the official hazard maps for extreme floods (HQextrem) in the Ahr valley with the inundated areas of July 2021 reveals a clear discrepancy in the affected areas and the need to rethink the basis for deriving the extreme scenarios. Hydrodynamic-numerical simulations of 1,000-year floods (HQ1000) accounting for historical events, and of the largest historical flood of 1804, reflect the hazard of the July 2021 flood considerably better, although both scenarios still underestimate the inundated areas. Specific effects such as the blockage of bridges by debris and geomorphological changes in the river channel led to even larger inundated areas in July 2021 than the simulation results indicate. Based on this analysis, a uniform definition of HQextrem for flood hazard mapping in Germany is proposed, oriented toward higher flood quantiles in the range of HQ1000. In addition, simulation-based reconstructions of the largest reliably documented historical floods and/or synthetic worst-case scenarios should be displayed separately in the flood hazard maps. This will make an important contribution to better protecting the potentially affected population and disaster management from being surprised by very rare and extreme floods in the future.
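The return-period arithmetic behind such statements follows from the fitted distribution: T = 1/(1 − F(q)). The sketch below illustrates this with synthetic annual maxima (not the Altenahr record) using `scipy.stats.genextreme`; incorporating historical floods properly requires a censored-likelihood fit, which is beyond this minimal example.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Hypothetical annual maximum peak discharges (m^3/s) standing in for a
# 74-year systematic gauge record -- NOT the Altenahr data.
systematic = genextreme.rvs(c=-0.1, loc=100, scale=40, size=74, random_state=rng)

def return_period(series, event_peak):
    """Return period of `event_peak` from a GEV fitted to annual maxima:
    T = 1 / (1 - F(event_peak)). Note scipy's shape c corresponds to -xi."""
    c, loc, scale = genextreme.fit(series)
    p_exceed = genextreme.sf(event_peak, c, loc=loc, scale=scale)
    return 1.0 / p_exceed

# An outlier far beyond the gauged record gets an enormous return period
# when the fit sees only the short systematic series.
event = systematic.max() * 3
t_event = return_period(systematic, event)
t_median = return_period(systematic, np.median(systematic))
```

This is exactly the pathology the abstract describes: extrapolating far into the tail of a distribution fitted to a short systematic record yields return periods that are numerically computable but practically meaningless without additional (e.g. historical) information.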
River reaches protected by dikes exhibit high damage potential due to the strong accumulation of assets in the hinterland. While providing efficient protection against low-magnitude flood events, dikes may fail under the load of extreme water levels and long flood durations. Hazard and risk assessments for river reaches protected by dikes have so far not adequately considered the fluvial inundation processes. In particular, the processes of dike failure and their influence on hinterland inundation and flood wave propagation lack comprehensive consideration. This study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) is a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime: (1) a 1D unsteady hydrodynamic model of river channel and floodplain flow between the dikes, (2) a probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) a 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the coupled 1D and 2D models, the dependence between hydraulic loads at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by seepage flow through the dike core (micro-instability). The 2D storage cell model, driven by the breach outflow boundary conditions, computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise.
IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes, reflected in the input hydrographs, and for the randomness of dike failures, given by breach locations, times and widths. The model was developed and tested on an approximately 91 km long, heavily diked reach of the German part of the Elbe River between the gauges Torgau and Vockerode. The reach is characterised by a low slope and fairly flat, extended hinterland areas. Scenario calculations with synthetic input hydrographs for the main river and the tributary were carried out for floods with return periods of T = 100, 200, 500 and 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. In the disaggregated display mode, the dike hazard maps indicate the failure probabilities for each considered breach mechanism. Besides binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display the spatial patterns of the considered flood intensity indicators and their associated return periods. Finally, scenarios of polder deployment for the extreme floods with T = 200, 500 and 1000 a were simulated with IHAM. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions, incorporating the effects of technical flood protection measures. With its major outputs in the form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
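The Monte Carlo logic behind per-mechanism dike hazard maps can be sketched in a few lines. The fragility curves, load ensemble and parameter values below are entirely hypothetical stand-ins; the actual IHAM breach model resolves far more physics (breach growth, outflow, coupling to the 1D/2D hydraulics).

```python
import numpy as np

rng = np.random.default_rng(1)
MECHANISMS = ("overtopping", "piping", "micro_instability")

# Hypothetical fragility slopes: how quickly the conditional failure
# probability rises with the hydraulic load (m above a critical level).
# Illustrative numbers only.
SLOPE = {"overtopping": 2.0, "piping": 0.6, "micro_instability": 0.3}

def p_fail(mechanism, load):
    """Conditional failure probability of a dike section given the load."""
    return 1.0 - np.exp(-SLOPE[mechanism] * np.maximum(load, 0.0))

def dike_hazard(loads, n_runs=10_000):
    """Monte Carlo estimate of per-mechanism and overall failure
    probabilities for one dike section over a random flood-load ensemble."""
    draws = rng.choice(loads, size=n_runs)            # random hydrograph peaks
    u = rng.random((n_runs, len(MECHANISMS)))
    fails = np.column_stack([u[:, k] < p_fail(m, draws)
                             for k, m in enumerate(MECHANISMS)])
    per_mech = dict(zip(MECHANISMS, fails.mean(axis=0)))
    overall = fails.any(axis=1).mean()                # any mechanism fails
    return per_mech, overall

# Synthetic load ensemble: exceedance of the critical level in metres.
loads = rng.gamma(shape=2.0, scale=0.15, size=500)
per_mech, overall = dike_hazard(loads)
```

The `per_mech` dictionary corresponds conceptually to the disaggregated display mode described above, and `overall` to the aggregate failure probability of the section.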
Stochastic modeling of precipitation for estimation of hydrological extremes is an important element of flood risk assessment and management. The spatially consistent estimation of rainfall fields and their temporal variability remains challenging and is addressed by various stochastic weather generators.
In this study, two types of weather generators are evaluated against observed data and benchmarked regarding their ability to simulate spatio-temporal precipitation fields in the Rhine catchment. A multi-site station-based weather generator uses an auto-regressive model and estimates the spatial correlation structure between stations. Another weather generator is raster-based and uses the nearest-neighbor resampling technique for reshuffling daily patterns while preserving the correlation structure between the observations.
Both weather generators perform well and are comparable at the point (station) scale with regard to daily mean and 99.9th-percentile precipitation as well as wet/dry frequencies and transition probabilities. The areal extreme precipitation at the sub-basin scale is, however, overestimated by the station-based weather generator due to an overestimation of the correlation structure between individual stations. The auto-regressive model tends to generate larger rainfall fields in space for extreme precipitation than observed, particularly in summer. The weather generator based on nearest-neighbor resampling reproduces the observed daily and multi-day (5-, 10- and 20-day) extreme events at a similar magnitude. Improvements in performance regarding wet frequencies and transition probabilities are recommended for both models.
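The nearest-neighbor reshuffling idea can be illustrated with a toy raster record: match the current simulated day against the observed record by a summary feature, then adopt the spatial pattern of a successor of one of the k closest analogue days. The feature (areal-mean precipitation), grid size and k below are arbitrary choices for this sketch, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "observed" record: daily precipitation fields on a 10x10 raster.
n_days, ny, nx = 400, 10, 10
obs = rng.gamma(shape=0.6, scale=4.0, size=(n_days, ny, nx))
areal = obs.reshape(n_days, -1).mean(axis=1)   # areal-mean feature per day

def knn_resample(n_sim, k=5):
    """Generate a daily sequence by k-nearest-neighbour resampling:
    find the k observed days closest to the current day's areal mean and
    adopt the field of a randomly chosen one's successor, so that both the
    spatial pattern and the day-to-day persistence come from observations."""
    sim = [obs[rng.integers(n_days)]]
    for _ in range(n_sim - 1):
        feature = sim[-1].mean()
        # exclude the last observed day, which has no successor
        dist = np.abs(areal[:-1] - feature)
        neighbours = np.argsort(dist)[:k]
        pick = rng.choice(neighbours)
        sim.append(obs[pick + 1])
    return np.stack(sim)

fields = knn_resample(200)
```

Because every simulated field is a copied observed field, the inter-station (here: inter-cell) correlation structure of extremes is preserved by construction, which is the property the abstract highlights for the raster-based generator.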
A wide variety of processes controls the time of occurrence, duration, extent, and severity of river floods. Classifying flood events by their causative processes may assist in enhancing the accuracy of local and regional flood frequency estimates and support the detection and interpretation of any changes in flood occurrence and magnitudes. This paper provides a critical review of existing causative classifications of instrumental and preinstrumental series of flood events, discusses their validity and applications, and identifies opportunities for moving toward more comprehensive approaches. So far, no unified definition of the causative mechanisms of flood events exists. Existing frameworks for the classification of instrumental and preinstrumental series of flood events adopt different perspectives: hydroclimatic (large-scale circulation patterns and atmospheric state at the time of the event), hydrological (catchment-scale precipitation patterns and antecedent catchment state), and hydrograph-based (indirectly considering generating mechanisms through their effects on hydrograph characteristics). All of these approaches intend to capture the flood-generating mechanisms and are useful for characterizing flood processes at various spatial and temporal scales. However, uncertainty analyses with respect to indicators, classification methods, and data to assess the robustness of the classification are rarely performed, which limits the transferability across different geographic regions. It is argued that more rigorous testing is needed. There are opportunities for extending classification methods to include indicators of the space-time dynamics of rainfall, antecedent wetness, and routing effects, which will make the classification schemes even more useful for understanding and estimating floods. This article is categorized under: Science of Water > Water Extremes; Science of Water > Hydrological Processes; Science of Water > Methods.
Large-scale flood risk assessments are crucial for decision making, especially with respect to new flood defense schemes, adaptation planning and estimating insurance premiums. We apply the process-based Regional Flood Model (RFM) to simulate a 5000-year flood event catalog for all major catchments in Germany and derive risk curves based on the losses per economic sector. The RFM uses a continuous process simulation including a multisite, multivariate weather generator, a hydrological model considering heterogeneous catchment processes, a coupled 1D-2D hydrodynamic model considering dike overtopping and hinterland storage, spatially explicit sector-wise exposure data and empirical multi-variable loss models calibrated for Germany. For all components, uncertainties in the data and models are estimated. We estimate the median Expected Annual Damage (EAD) and the Value at Risk at the 99.5% confidence level for Germany to be €0.529 bn and €8.865 bn, respectively. The commercial sector dominates, contributing about 60% of the total risk, followed by the residential sector. The agricultural sector is affected by small-return-period floods but contributes less than 3% of the total risk. The overall EAD is comparable to other large-scale estimates. However, the estimation of losses for specific return periods is substantially improved. The spatial consistency of the risk estimates avoids the large overestimation of losses for rare events that is common in other large-scale assessments with homogeneous return periods. Thus, the process-based, spatially consistent flood risk assessment by RFM is an important step forward and will serve as a benchmark for future German-wide flood risk assessments.
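The EAD mentioned above is, conceptually, the integral of loss over annual exceedance probability, EAD = ∫ L(p) dp with p = 1/T. The sketch below evaluates that integral with the trapezoidal rule over a set of return-period scenarios; the loss figures are purely illustrative, not the RFM results.

```python
import numpy as np

# Hypothetical loss estimates (billion EUR) per return period --
# illustrative numbers only, not the RFM results.
return_periods = np.array([2, 5, 10, 50, 100, 200, 500, 1000, 5000], dtype=float)
losses = np.array([0.05, 0.2, 0.5, 2.0, 3.5, 5.0, 8.0, 11.0, 20.0])

def expected_annual_damage(T, L):
    """EAD as the integral of loss over annual exceedance probability
    p = 1/T, evaluated with the trapezoidal rule. Contributions from events
    outside the scenario range are truncated."""
    p = 1.0 / np.asarray(T, dtype=float)
    order = np.argsort(p)                      # integrate over increasing p
    p, L = p[order], np.asarray(L, dtype=float)[order]
    return float(np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(p)))

ead = expected_annual_damage(return_periods, losses)
```

Note how the frequent, low-loss events dominate the integral, which is consistent with the abstract's observation that sectors exposed to small-return-period floods (e.g. agriculture) can still contribute little to total risk when their absolute losses are small.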
For attributing hydrological changes to anthropogenic climate change, catchment models are driven by climate model output. A widespread approach to bridge the spatial gap between global climate and hydrological catchment models is to use a weather generator conditioned on weather patterns (WPs). This approach assumes that changes in local climate are characterized by between-type changes of patterns. In this study we test this assumption by analyzing a previously developed WP classification for the Rhine basin, which is based on dynamic and thermodynamic variables. We quantify changes in pattern characteristics and associated climatic properties. The amount of between- and within-type changes is investigated by comparing observed trends to trends resulting solely from WP occurrence. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years are analyzed. Increasing frequency is found for some patterns associated with high precipitation, although the trend sign highly depends on the considered period. Trends and interannual variations of WP frequencies are related to the long-term variability of large-scale circulation modes. Long-term WP internal warming is evident for summer patterns and enhanced warming for spring/autumn patterns since the 1970s. Observed trends in temperature and partly in precipitation are mainly associated with frequency changes of specific WPs, but some amount of within-type changes remains. The classification can be used for downscaling of past changes considering this limitation, but the inclusion of thermodynamic variables into the classification impedes the downscaling of future climate projections.
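The multi-window trend screening described above (all sub-periods of at least 31 years within 1901-2010) can be sketched as follows. The synthetic weather-pattern frequency series, its drift and noise level are invented for illustration; the study's trend estimator and significance testing may differ (a simple least-squares slope is used here).

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1901, 2011)
# Hypothetical annual weather-pattern frequency (days per year) with a weak
# upward drift -- illustrative only.
freq = 30.0 + 0.03 * (years - years[0]) + rng.normal(0.0, 4.0, size=years.size)

def share_of_positive_trends(y, x, min_len=31):
    """Fit a least-squares slope to every sub-period of at least `min_len`
    years and return the share of sub-periods with a positive slope -- a
    simple measure of how strongly the trend sign depends on the period."""
    signs = []
    for i in range(len(x) - min_len + 1):
        for j in range(i + min_len, len(x) + 1):
            slope = np.polyfit(x[i:j], y[i:j], 1)[0]
            signs.append(slope > 0.0)
    return float(np.mean(signs))

share_pos = share_of_positive_trends(freq, years)
```

A share near 1 (or 0) indicates a period-robust trend sign, whereas values near 0.5 reflect the strong period dependence the abstract reports for some weather-pattern frequencies.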
To understand past flood changes in the Rhine catchment, and in particular the role of anthropogenic climate change in extreme flows, an attribution study relying on a proper GCM (general circulation model) downscaling is needed. A downscaling based on conditioning a stochastic weather generator on weather patterns is a promising approach. This approach assumes a strong link between weather patterns and local climate, and sufficient GCM skill in reproducing the weather pattern climatology. These presuppositions are evaluated here for the first time, using 111 years of daily climate data from 490 stations in the Rhine basin and comprehensively testing the number of classification parameters and the GCM weather pattern characteristics. A classification based on a combination of mean sea level pressure, temperature, and humidity from the ERA20C reanalysis of atmospheric fields over central Europe, with 40 weather types, was found to be the most appropriate for stratifying six local climate variables. The corresponding skill is quite diverse though, ranging from good for radiation to poor for precipitation. Especially for the latter it was apparent that pressure fields alone cannot sufficiently stratify local variability. To test the skill of the latest generation of GCMs from the CMIP5 ensemble in reproducing the frequency, seasonality, and persistence of the derived weather patterns, output from 15 GCMs is evaluated. Most GCMs are able to capture these characteristics well, but some models showed consistent deviations in all three evaluation criteria and should be excluded from further attribution analysis.