Refine
Has Fulltext
- yes (23)
Year of publication
- 2018 (23)
Document Type
- Article (10)
- Postprint (8)
- Doctoral Thesis (3)
- Bachelor Thesis (1)
- Monograph/Edited Volume (1)
Keywords
- catchment (2)
- uncertainty (2)
- Analyse von Abflussganglinien (1)
- Angstraum (1)
- Big data mining zu Hochwasserrisiken (1)
- Bildung für nachhaltige Entwicklung (1)
- Chaos Theory (1)
- Chaostheorie (1)
- Euglyphida (1)
- Europe (1)
Institute
- Institut für Umweltwissenschaften und Geographie (23)
Reviews and syntheses
(2018)
The cycling of carbon (C) between the Earth's surface and the atmosphere is controlled by biological and abiotic processes that regulate C storage in biogeochemical compartments and release to the atmosphere. This partitioning is quantified using various forms of C-use efficiency (CUE), the ratio of C remaining in a system to C entering that system. Biological CUE is the fraction of C taken up that is allocated to biosynthesis. In soils and sediments, C storage depends also on abiotic processes, so the term C-storage efficiency (CSE) can be used. Here we first review and reconcile CUE and CSE definitions proposed for autotrophic and heterotrophic organisms and communities, food webs, whole ecosystems and watersheds, and soils and sediments using a common mathematical framework. Second, we identify general CUE patterns; for example, the actual CUE increases with improving growth conditions, and apparent CUE decreases with increasing turnover. We then synthesize > 5000 CUE estimates showing that CUE decreases with increasing biological and ecological organization, from unicellular to multicellular organisms and from individuals to ecosystems. We conclude that CUE is an emergent property of coupled biological-abiotic systems, and it should be regarded as a flexible and scale-dependent index of the capacity of a given system to effectively retain C.
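The CUE ratio at the heart of this review is a simple quotient; the sketch below illustrates it in Python. The function name and the example flux values are illustrative only and not taken from the paper.

```python
# Minimal sketch of the C-use-efficiency (CUE) ratio described above.
# The function name and example fluxes are illustrative, not from the paper.

def cue(c_retained, c_input):
    """CUE = carbon remaining in the system / carbon entering it."""
    if c_input <= 0:
        raise ValueError("carbon input must be positive")
    return c_retained / c_input

# Biological CUE: fraction of C uptake allocated to biosynthesis (growth).
growth = 30.0   # hypothetical C flux allocated to biomass
uptake = 100.0  # hypothetical total C uptake
print(cue(growth, uptake))  # 0.3
```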
Peatlands represent large terrestrial carbon banks. Given that most peat accumulates in boreal regions, where low temperatures and water saturation preserve organic matter, the existence of peat in (sub)tropical regions remains enigmatic. Here we examined peat and plant chemistry across a latitudinal transect from the Arctic to the tropics. Near-surface low-latitude peat has lower carbohydrate and greater aromatic content than near-surface high-latitude peat, creating a reduced oxidation state and resulting recalcitrance. This recalcitrance allows peat to persist in the (sub)tropics despite warm temperatures. Because we observed similar declines in carbohydrate content with depth in high-latitude peat, our data explain recent field-scale deep peat warming experiments in which catotelm (deeper) peat remained stable despite temperature increases up to 9 degrees C. We suggest that high-latitude deep peat reservoirs may be stabilized in the face of climate change by their ultimately lower carbohydrate and higher aromatic composition, similar to tropical peats.
With the growing size and use of night light time series from the Visible Infrared Imaging Radiometer Suite Day/Night Band (DNB), it is important to understand the stability of the dataset. All satellites observe differences in pixel values during repeat observations. In the case of night light data, these changes can be due to both environmental effects and changes in light emission. Here we examine the stability of individual locations of particular large scale light sources (e.g., airports and prisons) in the monthly composites of DNB data from April 2012 to September 2017. The radiances for individual pixels of most large light emitters are approximately normally distributed, with a standard deviation of typically 15-20% of the mean. Greenhouses and flares, however, are not stable sources. We observe geospatial autocorrelation in the monthly variations for nearby sites, while the correlation for sites separated by large distances is small. This suggests that local factors contribute most to the variation in the pixel radiances and furthermore that averaging radiances over large areas will reduce the total variation. A better understanding of the causes of temporal variation would improve the sensitivity of DNB to lighting changes.
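The abstract's closing point, that averaging radiances over large areas reduces the total variation, can be illustrated with a small simulation. The sketch assumes independent, normally distributed monthly radiances; the spatial autocorrelation observed for nearby pixels would weaken the effect in practice, and all numbers here are invented.

```python
import random
import statistics

# Illustrative only: coefficient of variation (CV) of one simulated pixel
# vs. the mean over many pixels, assuming independent ~normal radiances
# with a standard deviation of ~18% of the mean (mid-range of 15-20%).
random.seed(42)
mean_radiance = 100.0
sd = 0.18 * mean_radiance
months, n_pixels = 66, 50  # Apr 2012 - Sep 2017 spans 66 monthly composites

def cv(series):
    return statistics.stdev(series) / statistics.mean(series)

single = [random.gauss(mean_radiance, sd) for _ in range(months)]
area = [statistics.mean(random.gauss(mean_radiance, sd) for _ in range(n_pixels))
        for _ in range(months)]

print(cv(single))  # close to 0.18
print(cv(area))    # roughly 0.18 / sqrt(50), i.e. much smaller
```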
Physical and hydrological properties of peat as proxies for degradation of South African peatlands
(2018)
The physical and hydrological properties of peat from seven peatlands in northern Maputaland (South Africa) were investigated and related to the degradation processes of peatlands in different hydrogeomorphic settings. The selected peatlands are representative of typical hydrogeomorphic settings and different stages of human modification from natural to severely degraded. Nineteen transects (141 soil corings in total) were examined in order to describe peat properties typical of the distinct hydrogeomorphic settings. We studied degree of decomposition, organic matter content, bulk density, water retention, saturated hydraulic conductivity and hydrophobicity of the peats. From these properties we derived pore size distribution, unsaturated hydraulic conductivity and maximum capillary rise. We found that, after drainage, degradation advances faster in peatlands containing wood peat than in peatlands containing radicell peat. Eucalyptus plantations in catchment areas are especially threatening to peatlands in seeps, interdune depressions and unchannelled valley bottoms. All peatlands and their recharge areas require wise management, especially valley-bottom peatlands with swamp forest vegetation. Blocking drainage ditches is indispensable as a first step towards achieving the restoration of drained peatland areas, and further measures may be necessary to enhance the distribution of water. The sensitive swamp forest ecosystems should be given conservation priority.
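Deriving unsaturated properties from measured water retention is commonly done with a closed-form retention model. As an illustration only, a van Genuchten-type curve is sketched below; whether the study used this exact parameterization is an assumption, and the parameter values are invented for a peat-like soil.

```python
# Hedged sketch: a van Genuchten water retention curve, one common way to
# derive pore size distribution and unsaturated conductivity from retention
# data. Parameter values are illustrative, not measurements from the study.

def van_genuchten_theta(h, theta_r=0.1, theta_s=0.9, alpha=0.05, n=1.4):
    """Volumetric water content at pressure head h (cm, positive = suction)."""
    m = 1.0 - 1.0 / n
    if h <= 0:
        return theta_s  # saturated below the water table
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

print(van_genuchten_theta(0))      # saturation: 0.9
print(van_genuchten_theta(100))    # partially drained
print(van_genuchten_theta(10000))  # approaches residual content
```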
The evaluation and verification of landscape evolution models (LEMs) has long been limited by a lack of suitable observational data and statistical measures which can fully capture the complexity of landscape changes. This lack of data limits the use of objective function based evaluation prolific in other modelling fields, and restricts the application of sensitivity analyses in the models and the consequent assessment of model uncertainties. To overcome this deficiency, a novel model function approach has been developed, with each model function representing an aspect of model behaviour, which allows for the application of sensitivity analyses. The model function approach is used to assess the relative sensitivity of the CAESAR-Lisflood LEM to a set of model parameters by applying the Morris method sensitivity analysis for two contrasting catchments. The test revealed that the model was most sensitive to the choice of the sediment transport formula for both catchments, and that each parameter influenced model behaviours differently, with model functions relating to internal geomorphic changes responding in a different way to those relating to the sediment yields from the catchment outlet. The model functions proved useful for providing a way of evaluating the sensitivity of LEMs in the absence of data and methods for an objective function approach.
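The Morris method applied above screens parameters via elementary effects: walk a trajectory through parameter space, perturb one factor at a time, and average the absolute response changes. A minimal pure-Python sketch follows; the toy model and unit parameter ranges are placeholders for the CAESAR-Lisflood parameters and model functions.

```python
import random

# Minimal Morris elementary-effects screening. Each trajectory perturbs one
# factor at a time by `delta` in unit space; mu* is the mean absolute
# elementary effect per parameter. The toy model below is a placeholder.

def scale(x, bounds):
    return [lo + xi * (hi - lo) for xi, (lo, hi) in zip(x, bounds)]

def morris_mu_star(model, bounds, n_trajectories=20, delta=0.25, seed=1):
    rng = random.Random(seed)
    k = len(bounds)
    effects = [[] for _ in range(k)]
    for _ in range(n_trajectories):
        # random base point, leaving room for a +delta step in each factor
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = model(scale(x, bounds))
        for i in rng.sample(range(k), k):  # perturb factors in random order
            x[i] += delta
            y1 = model(scale(x, bounds))
            effects[i].append(abs(y1 - y0) / delta)
            y0 = y1
    return [sum(e) / len(e) for e in effects]

# Toy "model function": a response dominated by the first parameter.
yield_model = lambda p: 5.0 * p[0] + 0.5 * p[1] + 0.1 * p[2]
mu_star = morris_mu_star(yield_model, [(0, 1), (0, 1), (0, 1)])
print(mu_star)  # first parameter has the largest mu*
```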
Natural catchments are likely to show the existence of knickpoints in their river networks. The origin and genesis of the knickpoints can be manifold, considering that the present morphology is the result of the interactions of different factors such as tectonic movements, quaternary glaciations, river captures, variable lithology, and base-level changes. We analyzed the longitudinal profiles of the river channels in the Stura di Demonte Valley (Maritime Alps) to identify the knickpoints of such an alpine setting and to characterize their origins. The distribution and the geometry of stream profiles were used to identify the possible causes of the changes in stream gradients and to define zones with genetically linked knickpoints. Knickpoints are key geomorphological features for reconstructing the evolution of fluvially dissected basins, once the different perturbing factors affecting the ideally graded fluvial system have been detected. This study shows that even in a regionally small area, perturbations of river profiles are caused by multiple factors. Thus, attributing (automatically) extracted knickpoints solely to one factor can potentially lead to incomplete interpretations of catchment evolution.
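Automated knickpoint extraction of the kind cautioned against above typically flags abrupt gradient changes along a longitudinal profile. The sketch below uses a simple slope-ratio threshold as an illustration; this is an assumed detection rule, not the method of the study, and the toy profile is invented.

```python
# Illustrative only: flag candidate knickpoints as abrupt downstream
# steepening along a longitudinal river profile. The threshold rule and
# toy profile are assumptions, not the study's methodology.

def knickpoint_candidates(distance, elevation, ratio=2.0):
    """Indices where local slope exceeds `ratio` times the upstream slope."""
    hits = []
    for i in range(1, len(distance) - 1):
        s_up = (elevation[i - 1] - elevation[i]) / (distance[i] - distance[i - 1])
        s_dn = (elevation[i] - elevation[i + 1]) / (distance[i + 1] - distance[i])
        if s_up > 0 and s_dn > ratio * s_up:
            hits.append(i)
    return hits

# Toy profile (km, m): gentle reaches with a sharp step at index 3.
dist = [0, 1, 2, 3, 4, 5]
elev = [500, 495, 490, 484, 450, 445]
print(knickpoint_candidates(dist, elev))  # [3]
```

Real workflows would work on DEM-derived profiles and combine such candidates with lithological and tectonic evidence, which is exactly the study's point about multiple perturbing factors.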
Flood risk is impacted by a range of physical and socio-economic processes. Hence, the quantification of flood risk ideally considers the complete flood risk chain, from atmospheric processes through catchment and river system processes to damage mechanisms in the affected areas. Although it is generally accepted that a multitude of changes along the risk chain can occur and impact flood risk, there is a lack of knowledge of how and to what extent changes in influencing factors propagate through the chain and finally affect flood risk. To fill this gap, we present a comprehensive sensitivity analysis which considers changes in all risk components, i.e. changes in climate, catchment, river system, land use, assets, and vulnerability. The application of this framework to the mesoscale Mulde catchment in Germany shows that flood risk can vary dramatically as a consequence of plausible change scenarios. It further reveals that components that have not received much attention, such as changes in dike systems or in vulnerability, may outweigh changes in often investigated components, such as climate. Although the specific results are conditional on the case study area and the selected assumptions, they emphasize the need for a broader consideration of potential drivers of change in a comprehensive way. Hence, our approach contributes to a better understanding of how the different risk components influence the overall flood risk.
The dataset in the present article provides information on protozoic silicon (Si) pools represented by euglyphid testate amoebae (TA) in soils of initial and forested biogeosystems. Protozoic Si pools were calculated from densities of euglyphid TA shells and corresponding Si contents. The article also includes data on potential annual biosilicification rates of euglyphid TA at the examined sites. Furthermore, data on selected soil parameters (e.g., readily-available Si, soil pH) and site characteristics (e.g., soil groups, climate data) can be found. The data might be interesting for researchers focusing on biological processes in Si cycling in general and euglyphid TA and corresponding protozoic Si pools in particular.
Various developments of recent decades have highlighted the relevance of the discourse on so-called "sustainable development". Sustainable development is being attributed ever greater importance, and education is regarded as one of the most important forces for advancing it. This bachelor thesis therefore investigates what understanding pupils have of the concept of sustainability. First, the theoretical background of sustainable development and of an "education for sustainable development" is clarified. On the basis of this theoretical foundation, a guideline-based interview is developed. From the results, conclusions about the pupils' understanding are drawn using Mayring's summarizing content analysis. Based on the results and their interpretation, considerations are finally made as to how the pupils' understanding can be broadened. In the study, six pupils in year ten of a comprehensive school were interviewed. It was found that an understanding of sustainability was present in only four of the six respondents, and even there largely with regard to ecological and social aspects. Personal interest, relevance to the pupils' everyday lives, and classroom teaching were identified as reasons in both directions.
This paper introduces a novel measure to assess similarity between event hydrographs. It is based on Cross Recurrence Plots and Recurrence Quantification Analysis, which have recently gained attention in a range of disciplines dealing with complex systems. The method attempts to quantify the event runoff dynamics and is based on the time delay embedded phase space representation of discharge hydrographs. A phase space trajectory is reconstructed from the event hydrograph, and pairs of hydrographs are compared to each other based on the distance of their phase space trajectories. Time delay embedding allows considering the multi-dimensional relationships between different points in time within the event. Hence, the temporal succession of discharge values is taken into account, such as the impact of the initial conditions on the runoff event. We provide an introduction to Cross Recurrence Plots and discuss their parameterization. An application example based on flood time series demonstrates how the method can be used to measure the similarity or dissimilarity of events, and how it can be used to detect events with rare runoff dynamics. It is argued that this method provides a more comprehensive approach to quantifying hydrograph similarity compared to conventional hydrological signatures.
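The time-delay embedding step described above can be sketched in a few lines. The embedding dimension, delay, and the plain trajectory distance below are illustrative choices; the paper derives similarity from recurrence-quantification measures, not from this simple average.

```python
import math

# Simplified sketch of time-delay embedding for hydrograph comparison.
# The dimension, delay and distance measure are illustrative choices,
# not the recurrence-quantification measures used in the paper.

def embed(series, dim=3, delay=1):
    """Time-delay embedding: map a 1-D series to points in R^dim."""
    n = len(series) - (dim - 1) * delay
    return [[series[i + j * delay] for j in range(dim)] for i in range(n)]

def trajectory_distance(a, b, dim=3, delay=1):
    """Mean point-wise distance between two embedded trajectories."""
    pairs = list(zip(embed(a, dim, delay), embed(b, dim, delay)))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

event1 = [0, 1, 4, 9, 6, 3, 1, 0]  # flashy toy hydrograph
event2 = [0, 1, 4, 9, 6, 3, 1, 0]  # identical dynamics
event3 = [0, 0, 1, 2, 3, 3, 2, 1]  # slower rise and recession
print(trajectory_distance(event1, event2))  # 0.0
print(trajectory_distance(event1, event3) > 0)  # True
```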
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas is almost exclusively focused on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where rainfall intensity exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces compounded with an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone and web surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics such as the water depth and a household's resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor for building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mix between a beta and a Bernoulli distribution to account for households that are able to prevent losses to their building’s structure was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
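The mixed response described above, a Bernoulli component for households that prevent any structural loss and a beta distribution for the relative loss of the remaining cases, can be sketched as a sampling routine. All parameter values below are hypothetical; in the thesis they are inferred via Bayesian inference from impact and resistance predictors rather than fixed by hand.

```python
import random

# Sketch of a Bernoulli-beta (zero-and-beta) predictive distribution for
# relative building-structure loss. All parameters are hypothetical; the
# thesis infers them from impact and resistance variables.

def sample_relative_loss(rng, p_zero=0.4, alpha=2.0, beta=8.0):
    """Draw one relative building-structure loss in [0, 1]."""
    if rng.random() < p_zero:  # household prevented any structural loss
        return 0.0
    return rng.betavariate(alpha, beta)

rng = random.Random(0)
draws = [sample_relative_loss(rng) for _ in range(10_000)]
share_zero = sum(d == 0.0 for d in draws) / len(draws)
mean_loss = sum(draws) / len(draws)
print(share_zero)  # close to 0.4
print(mean_loss)   # close to (1 - 0.4) * 2 / (2 + 8) = 0.12
```

A predictive distribution of this shape yields both a range of probable losses and the probability of zero loss, which a single point estimate cannot express.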
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates on high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
Uncertainty is an essential part of atmospheric processes and thus inherent to weather forecasts. Nevertheless, weather forecasts and warnings are still predominantly issued as deterministic (yes or no) forecasts, although research suggests that providing weather forecast users with additional information about the forecast uncertainty can enhance the preparation of mitigation measures. Communicating forecast uncertainty would allow for a provision of information on possible future events at an earlier time. The desired benefit is to enable users to start preparatory protective action at an earlier stage based on their own risk assessment and decision threshold. But not all users have the same threshold for taking action. In the course of the project WEXICOM (‘Wetterwarnungen: Von der Extremereignis-Information zu Kommunikation und Handlung’) funded by the Deutscher Wetterdienst (DWD), three studies were conducted between the years 2012 and 2016 to reveal how weather forecasts and warnings are reflected in weather-related decision-making. The studies asked which factors influence the perception of forecasts and the decision to take protective action, and how forecast users make sense of probabilistic information and the additional lead time. In a first exploratory study conducted in 2012, members of emergency services in Germany were asked how weather warnings are communicated to professional end users in the emergency community and how the warnings are converted into mitigation measures. A large number of open questions were selected to identify new topics of interest. The questions covered topics like users’ confidence in forecasts, their understanding of probabilistic information, as well as their lead time and decision thresholds to start with preparatory mitigation measures. Results show that emergency service personnel generally have a good sense of the uncertainty inherent in weather forecasts.
Although no single probability threshold could be identified for organisations to start with preparatory mitigation measures, it became clear that emergency services tend to avoid forecasts based on low probabilities as a basis for their decisions. Based on these findings, a second study conducted with residents of Berlin in 2014 further investigated the question of decision thresholds. The survey questions related to the perception of and prior experience with severe weather, the trustworthiness of forecasters and confidence in weather forecasts, and socio-demographic and socio-economic characteristics. Within the questionnaire, a scenario was created to determine individual decision thresholds and to see whether subgroups of the sample have different thresholds. The results show that people’s willingness to act tends to be higher, and decision thresholds lower, if the expected weather event is more severe or the property at risk is of higher value. Several factors influencing risk perception, such as education, housing status and ability to act, have significant effects, whereas socio-demographic determinants alone are often not sufficient to fully grasp risk perception and protection behaviour. Parallel to the quantitative studies, an interview study was conducted with 27 members of German civil protection between 2012 and 2016. The results show that the latest developments in (numerical) weather forecasting do not necessarily fit the current practice of German emergency services. These practices rely mostly on alarms and ground truth in a reactive manner rather than on anticipation based on prognoses or forecasts. As the potential consequences rather than the event characteristics determine protective action, the findings support the call and need for impact-based warnings. Forecasters will rely on impact data and need to learn the users’ understanding of impact.
Therefore, it is recommended to enhance weather communication not only by improving computer models and observation tools, but also by focusing on communication and collaboration. Using information about uncertainty demands awareness and acceptance of the limits of knowledge, that is, of the forecaster's capability to anticipate future developments of the atmosphere and of the user's capability to make sense of this information.
Schwarz-Rot-Geil
(2018)
Die Atmosphäre im Karli
(2018)
Räume, Linien, Punkte
(2018)