Copernicus
Flood risk is impacted by a range of physical and socio-economic processes. Hence, the quantification of flood risk ideally considers the complete flood risk chain, from atmospheric processes through catchment and river system processes to damage mechanisms in the affected areas. Although it is generally accepted that a multitude of changes along the risk chain can occur and impact flood risk, there is a lack of knowledge of how and to what extent changes in influencing factors propagate through the chain and finally affect flood risk. To fill this gap, we present a comprehensive sensitivity analysis which considers changes in all risk components, i.e. changes in climate, catchment, river system, land use, assets, and vulnerability. The application of this framework to the mesoscale Mulde catchment in Germany shows that flood risk can vary dramatically as a consequence of plausible change scenarios. It further reveals that components that have not received much attention, such as changes in dike systems or in vulnerability, may outweigh changes in often investigated components, such as climate. Although the specific results are conditional on the case study area and the selected assumptions, they emphasize the need for a broader consideration of potential drivers of change in a comprehensive way. Hence, our approach contributes to a better understanding of how the different risk components influence the overall flood risk.
Flash floods are caused by intense rainfall events and represent an insufficiently understood phenomenon in Germany. As a result of higher precipitation intensities, flash floods might occur more frequently in the future. In combination with changing land use patterns and urbanisation, damage mitigation, insurance and risk management in flash-flood-prone regions are becoming increasingly important. However, a better understanding of damage caused by flash floods requires the ex post collection of relevant but as yet sparsely available information for research. At the end of May 2016, very high and concentrated rainfall intensities led to severe flash floods in several southern German municipalities. The small town of Braunsbach stood as a prime example of the devastating potential of such events. Eight to ten days after the flash flood event, damage assessment and data collection were conducted in Braunsbach by investigating all affected buildings and their surroundings. To record and store the data on site, the open-source software bundle KoBoCollect was used as an efficient and easy way to gather information. Since the damage-driving factors of flash floods are expected to differ from those of riverine flooding, a post hoc data analysis was performed, aiming to identify the influence of flood processes and building attributes on damage grades, which reflect the extent of structural damage. The data analyses include the application of random forest, a random generalized linear model and multinomial logistic regression, as well as the construction of a local impact map to reveal influences on the damage grades. Further, a Spearman's rho correlation matrix was calculated. The results reveal that the damage-driving factors of flash floods differ from those of riverine floods to a certain extent. In particular, the exposure of a building in the flow direction shows a strong correlation with the damage grade and has high predictive power within the constructed damage models. Additionally, the results suggest that building materials as well as various building aspects, such as the existence of a shop window and the surroundings, might have an effect on the resulting damage. To verify and confirm these outcomes, as well as to support future mitigation strategies, risk management and planning, more comprehensive and systematic data collection is necessary.
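The rank-correlation step described in this abstract can be sketched in a few lines. The code below is an illustrative, self-contained implementation of Spearman's rho with tie handling; the predictor and damage-grade values shown are invented for demonstration and are not the Braunsbach survey data.

```python
# Illustrative Spearman's rho, as used for damage-grade correlation
# analyses. All data below are hypothetical, not the survey data.

def rank(values):
    # Assign 1-based average ranks, handling ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ordinal predictor (e.g. exposure in flow direction)
# against hypothetical damage grades:
exposure = [0, 0, 1, 1, 2, 2, 3, 3]
damage_grade = [1, 2, 1, 3, 3, 4, 4, 5]
rho = spearman_rho(exposure, damage_grade)
```

A monotonically increasing rho close to 1 would indicate, as in the study, that stronger exposure tends to coincide with higher damage grades.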
Linking together the processes of rapid physical erosion and the resultant chemical dissolution of rock is a crucial step in building an overall deterministic understanding of weathering in mountain belts. Landslides, which are the most volumetrically important geomorphic process at these high rates of erosion, can generate extremely high rates of very localised weathering. To elucidate how this process works we have taken advantage of uniquely intense landsliding, resulting from Typhoon Morakot, in the T'aimali River and surrounds in southern Taiwan. Combining detailed analysis of landslide seepage chemistry with estimates of catchment-by-catchment landslide volumes, we demonstrate that in this setting the primary role of landslides is to introduce fresh, highly labile mineral phases into the surface weathering environment. There, rapid weathering is driven by the oxidation of pyrite and the resultant sulfuric-acid-driven dissolution of primarily carbonate rock. The total dissolved load correlates well with dissolved sulfate - the chief product of this style of weathering - in both landslides and streams draining the area (R² = 0.841 and 0.929 respectively; p < 0.001 in both cases), with solute chemistry in seepage from landslides and catchments affected by significant landsliding governed by the same weathering reactions. The predominance of coupled carbonate-sulfuric-acid-driven weathering is the key difference between these sites and previously studied landslides in New Zealand (Emberson et al., 2016), but in both settings increasing volumes of landslides drive greater overall solute concentrations in streams. Bedrock landslides, by excavating deep below saprolite-rock interfaces, create conditions for weathering in which all mineral phases in a lithology are initially unweathered within landslide deposits. As a result, the most labile phases dominate the weathering immediately after mobilisation and during a transient period of depletion. This mode of dissolution can strongly alter the overall output of solutes from catchments and their contribution to global chemical cycles if landslide-derived material is retained in catchments for extended periods after mass wasting.
Soil properties show high heterogeneity at different spatial scales and their correct characterization remains a crucial challenge over large areas. The aim of this study is to quantify the impact of different types of uncertainties that arise from the unresolved soil spatial variability on simulated hydrological states and fluxes. Three perturbation methods are presented for the characterization of uncertainties in soil properties. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The uncertainties are propagated through the distributed mesoscale hydrological model (mHM) to assess the impact on the simulated states and fluxes. The model outputs are analysed by aggregating the results at different spatial and temporal scales. The results show that the impact of the different uncertainties introduced into the original soil map is equivalent when the simulated model outputs are analysed at the model grid resolution (i.e. 500 m). However, several differences are identified by aggregating states and fluxes at different spatial scales (by subcatchments of different sizes or by coarsening the grid resolution). Streamflow is only sensitive to the perturbation of long spatial structures, while distributed states and fluxes (e.g. soil moisture and groundwater recharge) are only sensitive to the local noise introduced into the original soil properties. A clear identification of the temporal and spatial scale for which finer-resolution soil information is (or is not) relevant is unlikely to be universal. However, the comparison of the impacts on the different hydrological components can be used to prioritize model improvements in specific applications, either by collecting new measurements or through calibration and data assimilation approaches. In conclusion, the study underlines the importance of a correct characterization of uncertainty in soil properties. Thus, soil maps with additional information regarding the unresolved soil spatial variability would provide strong support to hydrological modelling applications.
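As a rough illustration of the kind of perturbation experiment described (not the paper's actual methods), the sketch below adds multiplicative noise with a controllable spatial correlation length to a one-dimensional soil-property field. The moving-average construction of correlated noise and all parameter values are simplifying assumptions.

```python
# Illustrative perturbation of a 1-D soil-property field: white noise is
# smoothed by a moving average of width ~corr_len to mimic short- vs.
# long-range spatial structure, then rescaled to a target std deviation.
import random

def correlated_noise(n, corr_len, sigma, seed=0):
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = max(corr_len // 2, 0)
    smooth = []
    for i in range(n):
        window = white[max(0, i - half): i + half + 1]
        smooth.append(sum(window) / len(window))
    # Rescale so the sample standard deviation equals sigma.
    mean = sum(smooth) / n
    var = sum((v - mean) ** 2 for v in smooth) / n
    scale = sigma / (var ** 0.5) if var > 0 else 0.0
    return [v * scale for v in smooth]

def perturb(field, corr_len, sigma, seed=0):
    # Multiplicative perturbation, clipped at zero (soil properties
    # such as porosity cannot be negative).
    noise = correlated_noise(len(field), corr_len, sigma, seed)
    return [max(f * (1.0 + e), 0.0) for f, e in zip(field, noise)]

porosity = [0.4] * 100                                   # hypothetical field
local = perturb(porosity, corr_len=2, sigma=0.05, seed=1)   # local noise
broad = perturb(porosity, corr_len=40, sigma=0.05, seed=1)  # long structure
```

Comparing model runs driven by `local` versus `broad` fields mirrors the paper's finding that streamflow responds mainly to long spatial structures while distributed states respond to local noise.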
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events into a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies of damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability for granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitude and at both the microscale and the macroscale levels. The main findings are the dominance of uncertainty stemming from the hazard magnitude and the persistent behaviour of intrinsic uncertainties at both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
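The "granular portfolio" idea can be illustrated with a toy model: macroscale damage emerges as the average over many microscale damage functions, each with its own critical hazard magnitude. The functional forms and parameters below are invented assumptions, not the unified approach of the study.

```python
# Toy micro/macro damage model. Each portfolio element has a critical
# hazard magnitude (threshold); its relative damage grows linearly above
# it, capped at total loss. Macroscale damage is the portfolio mean.
import random

def micro_damage(magnitude, threshold, steepness=0.5):
    # Relative damage of a single element, in [0, 1].
    if magnitude <= threshold:
        return 0.0
    return min(1.0, steepness * (magnitude - threshold))

def macro_damage(magnitude, thresholds):
    # Portfolio-level relative damage: mean over all elements.
    return sum(micro_damage(magnitude, t) for t in thresholds) / len(thresholds)

# Hypothetical portfolio: 1000 elements with normally distributed
# critical magnitudes (e.g. flood depths in metres).
rng = random.Random(42)
thresholds = [rng.gauss(2.0, 0.5) for _ in range(1000)]
d_low = macro_damage(1.5, thresholds)
d_high = macro_damage(3.0, thresholds)
```

Even though each element responds with a hard kink, the averaged macroscale curve is smooth, which is one reason aggregate damage functions look alike across hazards.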
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 °C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that, due to wave propagation, ice is discharged much faster back into the ocean than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. Millennium-scale storage of at least 80% of the additional ice requires placing it at a distance of at least 700 km from the coastline. The pumping energy required to raise the potential energy of the ocean water enough to mitigate the currently observed 3 mm yr⁻¹ of sea-level rise would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
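The quoted energy demand can be sanity-checked with back-of-envelope numbers. The sketch below computes only the ideal potential-energy term, using assumed round values for ocean area, lift height, and global primary energy supply; it yields roughly 5-6% before any pumping inefficiencies or friction, consistent in order of magnitude with the stated figure of more than 7%.

```python
# Order-of-magnitude check: lifting one year's worth of mitigated
# sea-level rise onto the Antarctic plateau. All constants are assumed
# round numbers, not the paper's values.
OCEAN_AREA = 3.61e14       # m^2, global ocean surface
SLR_RATE = 3.0e-3          # m/yr, mitigated sea-level rise
RHO_SEAWATER = 1025.0      # kg/m^3
G = 9.81                   # m/s^2
LIFT_HEIGHT = 3000.0       # m, assumed mean elevation of storage sites
PRIMARY_ENERGY = 5.8e20    # J/yr, approx. global primary energy supply

volume_per_year = OCEAN_AREA * SLR_RATE            # m^3/yr
mass_per_year = volume_per_year * RHO_SEAWATER     # kg/yr
energy_per_year = mass_per_year * G * LIFT_HEIGHT  # J/yr, potential energy only
fraction = energy_per_year / PRIMARY_ENERGY        # ~0.05-0.06 (ideal case)
```

The ideal lift energy alone is a few tens of exajoules per year; real pumps, pipe friction, and desalination-free brine handling would push the requirement higher still.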
With accelerating climate cooling in the late Cenozoic, glacial and periglacial erosion became more widespread on the surface of the Earth. The resultant shift in erosion patterns significantly changed the large-scale morphology of many mountain ranges worldwide. Whereas the glacial fingerprint is easily distinguished by its characteristic fjords and U-shaped valleys, the periglacial fingerprint is more subtle but potentially prevails in some mid- to high-latitude landscapes. Previous models have advocated a frost-driven control on debris production at steep headwalls and glacial valley sides. Here we investigate the important role that periglacial processes also play in less steep parts of mountain landscapes. Understanding the influences of frost-driven processes in low-relief areas requires a focus on the consequences of an accreting soil mantle, which characterises such surfaces. We present a new model that quantifies two key physical processes: frost cracking and frost creep, as a function of both temperature and sediment thickness. Our results yield new insights into how climate and sediment transport properties combine to scale the intensity of periglacial processes. The thickness of the soil mantle strongly modulates the relation between climate and the intensity of mechanical weathering and sediment flux. Our results also point to an offset between the conditions that promote frost cracking and those that promote frost creep, indicating that a stable climate can provide optimal conditions for only one of those processes at a time. Finally, quantifying these relations also opens up the possibility of including periglacial processes in large-scale, long-term landscape evolution models, as demonstrated in a companion paper.
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide, and the loss is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-reinforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate has so far relied on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at gaining a conceptual understanding by deriving a purposefully simple equation for the self-reinforcing feedback, which is then used to estimate the melt time for different levels of warming from three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature and omits important processes, such as ice dynamics, that would be needed for applications to sea-level rise on centennial timescales. However, if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 °C above the threshold and 500 years for 5 °C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
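The two quoted medians suffice to pin down a two-parameter curve of the logarithmic form the abstract describes. The sketch below does exactly that: it reproduces the quoted numbers and shows the critical slow-down as the warming level approaches the threshold. It is not the paper's actual equation, whose coefficients derive from observable ice-sheet characteristics.

```python
# Fit t(dT) = a - b*ln(dT) through the two quoted medians: 3500 yr at
# 0.5 C above the threshold and 500 yr at 5 C. Illustrative only.
import math

T1, t1 = 0.5, 3500.0   # (warming above threshold in C, melt time in yr)
T2, t2 = 5.0, 500.0

b = (t1 - t2) / (math.log(T2) - math.log(T1))
a = t1 + b * math.log(T1)

def melt_time(dT):
    # Median time to lose 10% of present-day ice volume; diverges
    # (critical slow-down) as dT -> 0+.
    return a - b * math.log(dT)
```

By construction `melt_time(0.5)` returns 3500 and `melt_time(5.0)` returns 500; intermediate warming levels interpolate logarithmically between them.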
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component within the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and it can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require from well under 1 s to roughly 1 min on a personal laptop computer. These characteristics - multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute time - make gFlex an effective tool for flexural isostatic modeling across the geosciences.
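A minimal one-dimensional analogue of the finite difference solutions described can be written in a few dozen lines. This is an illustrative sketch, not gFlex's code: it solves the thin-plate flexure equation D·w'''' + Δρ·g·w = q with a five-point stencil, approximating clamped (0-displacement, 0-slope) ends by pinning the two outermost nodes on each side, and uses plain Gaussian elimination rather than an optimized solver.

```python
# Sketch of 1-D lithospheric flexure under a surface load q(x), with
# flexural rigidity D and restoring density contrast drho. Not gFlex.

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting on a dense system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def flexure_1d(q, dx, D, drho=600.0, g=9.81):
    # Five-point stencil for D*w'''' + drho*g*w = q; the two outermost
    # nodes on each side are pinned to approximate clamped ends.
    n = len(q)
    k = drho * g
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    c = D / dx ** 4
    for i in range(n):
        if i < 2 or i >= n - 2:
            A[i][i] = 1.0           # w = 0 at the pinned boundary nodes
        else:
            A[i][i - 2] += c
            A[i][i - 1] += -4 * c
            A[i][i] += 6 * c + k
            A[i][i + 1] += -4 * c
            A[i][i + 2] += c
            b[i] = q[i]
    return solve_linear(A, b)

# Point-like load (in Pa) at the centre of an 81-node, 400 km profile:
q = [0.0] * 81
q[40] = 1e7
w = flexure_1d(q, dx=5000.0, D=1e23)  # deflection in metres
```

The resulting deflection peaks under the load and decays toward the clamped ends, the expected elastic-plate response.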
Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses posing difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind–damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal dependence between model and data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of model applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme loss, for which it shows the least bias and errors comparable to the probabilistic model by Prahl et al. (2012).
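The cubic wind-damage relationship discussed above is commonly implemented as a loss index proportional to the cubed exceedance of the local 98th-percentile gust speed, following Klawa and Ulbrich (2003). A minimal sketch, with the calibration constant that converts the index into monetary loss omitted:

```python
# Klawa & Ulbrich (2003)-type storm loss index: sum over stations of the
# cubed relative exceedance of the local 98th-percentile gust speed.
# Percentile estimation and population weighting are omitted here.

def loss_index(gusts, v98):
    # gusts: daily maximum gust per station (m/s)
    # v98:   matching station-wise 98th percentiles of gust climatology
    total = 0.0
    for v, v_ref in zip(gusts, v98):
        if v > v_ref:
            total += (v / v_ref - 1.0) ** 3
    return total
```

Because only exceedances above the local 98th percentile contribute, the index is zero for ordinary wind days and grows steeply for the most severe storms, which is why such deterministic models perform best in the extreme-loss range noted above.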