Publications 2021 – Institut für Umweltwissenschaften und Geographie (87 documents)
Precipitation forecasting has an important place in everyday life – during the day we may have countless bits of small talk about the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? The answer will certainly depend on what your weather application shows.
While for years people were guided by precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars now allows forecasts at much higher spatiotemporal resolution – minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, not only in everyday life but also in professional applications such as early warning, sewage control, and agriculture.
A system for precipitation nowcasting comprises two major components: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
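The tracking-and-extrapolation idea behind these benchmark models can be illustrated with a toy advection step: a rain field is shifted along a (here constant, integer) motion vector once per time step. This is a minimal sketch in plain NumPy; the function and variable names are illustrative and not part of the rainymotion API, which estimates a spatially varying motion field from radar images via optical flow.

```python
import numpy as np

def advect(field, u, v, steps=1):
    """Advect a 2D precipitation field by an integer motion vector (u, v)
    per step -- a toy stand-in for the warping step in optical-flow-based
    nowcasting (names are illustrative, not the rainymotion API)."""
    out = field.copy()
    for _ in range(steps):
        # roll rows by v and columns by u once per time step
        out = np.roll(out, shift=(v, u), axis=(0, 1))
    return out

# A single rain cell moving one pixel east per 5-min step:
rain = np.zeros((5, 5))
rain[2, 1] = 1.0
nowcast = advect(rain, u=1, v=0, steps=2)  # 10-min lead time
```

In an operational setting the motion vectors vary in space and the warping is sub-pixel; the constant shift here only conveys the principle.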
One promising direction for model development is to explore the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning has shown promising results in many fields of computer science, such as image and speech recognition or natural language processing, where it has started to dramatically outperform reference methods.
One of the main reasons is the high benefit of using "big data" for training. Hence, the emerging interest in deep learning in the atmospheric sciences is also driven by the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this end, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. Trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks; the latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
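The two routine verification metrics are straightforward to compute from paired observation and forecast fields. A minimal sketch (the function names are our own; any verification package will expose equivalents):

```python
import numpy as np

def csi(obs, fcst, threshold):
    """Critical success index: hits / (hits + misses + false alarms),
    evaluated on exceedances of an intensity threshold (e.g. in mm/h)."""
    o = obs >= threshold
    f = fcst >= threshold
    hits = np.sum(o & f)
    misses = np.sum(o & ~f)
    false_alarms = np.sum(~o & f)
    return hits / (hits + misses + false_alarms)

def mae(obs, fcst):
    """Mean absolute error of the continuous intensities."""
    return np.mean(np.abs(obs - fcst))

# Tiny made-up example: four pixels of observed vs. forecast rain rate
obs = np.array([0.0, 0.5, 2.0, 6.0])
fcst = np.array([0.1, 1.5, 1.8, 4.0])
score = csi(obs, fcst, threshold=1.0)  # hits=2, misses=0, false alarms=1
err = mae(obs, fcst)
```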
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
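The spectral diagnosis can be illustrated with a radially averaged power spectral density: applying even a mild smoothing filter to a random field visibly removes power at high wavenumbers, i.e. small spatial scales. This is a simplified sketch of the general technique, not the analysis code used in the thesis.

```python
import numpy as np

def radial_psd(field):
    """Radially averaged power spectral density of a 2D field --
    a common diagnostic for spatial smoothing in nowcasts."""
    f = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(f) ** 2
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    counts = np.bincount(r.ravel())
    # average power per integer radial wavenumber
    return np.bincount(r.ravel(), weights=power.ravel()) / counts

rng = np.random.default_rng(0)
field = rng.normal(size=(64, 64))
# 3-point average acts as a low-pass (smoothing) filter:
smoothed = (field + np.roll(field, 1, axis=0) + np.roll(field, 1, axis=1)) / 3.0
psd_raw = radial_psd(field)
psd_smooth = radial_psd(smoothed)
# psd_smooth loses power relative to psd_raw at high wavenumbers
```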
The model development, together with the verification experiments for both conventional and deep learning predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over the lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
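Within this framework, the location error at a given lead time is simply the Euclidean distance between the observed and predicted feature positions. A minimal sketch with made-up track coordinates (in km):

```python
import numpy as np

def location_error(observed_track, predicted_track):
    """Euclidean distance between observed and predicted feature
    locations at each lead time (coordinates in km)."""
    obs = np.asarray(observed_track, dtype=float)
    pred = np.asarray(predicted_track, dtype=float)
    return np.linalg.norm(obs - pred, axis=1)

# Hypothetical positions of one corner feature at +5 and +10 min:
observed = [(10.0, 10.0), (12.0, 13.0)]
predicted = [(10.0, 10.0), (15.0, 9.0)]
errors = location_error(observed, predicted)  # 0 km, then a 3-4-5 offset
```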
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites from the DWD. We evaluated the performance of four extrapolation models: two based on the linear extrapolation of corner motion, and two based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: their order of magnitude is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – as a result of location errors alone – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development targeted at its minimization. To that end, we also consider the high potential of deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
For around a decade, deep learning – the subfield of machine learning that refers to artificial neural networks comprising many computational layers – has been reshaping statistical model development in many research areas, such as image classification, machine translation, and speech recognition. Geoscientific disciplines in general, and hydrology in particular, have not stood aside from this movement. Recently, modern deep learning-based techniques and methods have gained popularity for solving a wide range of hydrological problems: modeling and forecasting river runoff, regionalizing hydrological model parameters, assessing available water resources, and identifying the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to speak of deep neural networks as a new generation of mathematical models that may not replace existing solutions, but can significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of development and application of deep neural networks in hydrology. It also offers a qualitative long-term forecast of how deep learning technology may develop for the corresponding hydrological modeling challenges, based on the Gartner Hype Cycle, which describes the life cycle of modern technologies in general terms.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two artificial neural network (ANN) architectures that have only recently been introduced to the field of hydrology: Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU). We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation, and two years served as warm-up periods, one year each for the calibration and validation periods; from the remaining 14 years, we sampled increasing amounts of data for model calibration and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. However, LSTM and GRU also exhibited higher calibration instability than GR4H. These findings confirm the potential of modern deep learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
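For readers unfamiliar with gated recurrent architectures, a single GRU step can be written out in a few lines of NumPy. This is the textbook formulation with bias terms omitted for brevity, not the implementation used in the study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a Gated Recurrent Unit (biases omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old and new state

rng = np.random.default_rng(1)
dim_h, dim_x = 4, 3
# Six weight matrices with toy random values:
mats = [rng.normal(scale=0.1, size=s)
        for s in [(dim_h, dim_x), (dim_h, dim_h)] * 3]
h = np.zeros(dim_h)
for x in rng.normal(size=(10, dim_x)):  # run over a short input sequence
    h = gru_step(h, x, *mats)
```

The gating keeps the hidden state bounded, which is one reason such cells train stably on long discharge series.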
Starkregen in Berlin [Heavy rainfall in Berlin]
(2021)
In the summers of 2017 and 2019, heavy rainfall events caused flooding at several locations in Berlin. In both years, this led to considerable disruption of Berliners' everyday lives and to substantial property damage. An interdisciplinary task force of the DFG research training group NatRiskChange investigated (1) the meteorological characteristics of two particularly striking storms and (2) the vulnerability of Berlin's population to heavy rainfall.
A comparative meteorological reconstruction of the heavy rainfall events of 2017 and 2019 revealed clear differences in the genesis and exceedance probabilities of the two storms. The 2017 event, with its relatively large spatial extent and long duration, was an atypical heavy rainfall event, whereas the 2019 storm was a typical short-duration event with pronounced spatial heterogeneity. A subsequent statistical analysis showed that, for longer rainfall durations (>=24 h), the 2017 event ranks as a large-scale extreme event with exceedance probabilities below 1% (i.e., return periods >=100 years). For 2019, by contrast, similar exceedance probabilities were calculated only locally and for shorter durations (1-2 h).
The vulnerability analysis is based on an online survey conducted in Berlin from April to June 2020. It addressed people who had already been affected by past heavy rainfall events and covered the damaging event itself, the resulting disruptions and damage, risk perception, and emergency and precautionary measures. The survey data (n=102) mainly refer to the events of 2017 and 2019 and show that the storms affected Berlin's population both in everyday life (e.g., when buying groceries) and in their own households (e.g., through flood damage). Moreover, the respondents' answers pointed to ways of further reducing society's vulnerability to heavy rainfall, for instance by supporting particularly affected groups (e.g., caregivers), through targeted information campaigns on protection against heavy rainfall, or by increasing the reach of severe weather warnings. A statistical analysis of the effectiveness of private emergency and precautionary measures based on the survey data confirmed earlier findings: there were indications that implementing precautionary measures, such as installing backwater valves, barrier systems, or pumps, can reduce heavy rainfall damage.
The results of this report underline the need for integrated heavy rainfall risk management that considers the risk components hazard, vulnerability, and exposure holistically and at multiple levels (e.g., state, municipal, private).
The efficiency of sediment routing from land to the ocean depends on the position of submarine canyon heads with regard to terrestrial sediment sources. We aim to identify the main controls on whether a submarine canyon head remains connected to terrestrial sediment input during Holocene sea-level rise. Globally, we identified 798 canyon heads that are currently located at the 120 m depth contour (the Last Glacial Maximum shoreline) and 183 canyon heads that are connected to the shore (within a distance of 6 km) during the present-day highstand. Regional hotspots of shore-connected canyons are the Mediterranean active margin and the Pacific coast of Central and South America. We used 34 terrestrial and marine predictor variables to predict shore-connected canyon occurrence using Bayesian regression. Our analysis shows that steep and narrow shelves facilitate canyon-head connectivity to the shore. Moreover, shore-connected canyons occur preferentially along active margins characterized by resistant bedrock and high river-water discharge.
Throughfall, that is, the fraction of rainfall that passes through the forest canopy, is strongly influenced by rainfall and forest stand characteristics which are in turn both subject to seasonal dynamics. Disentangling the complex interplay of these controls is challenging, and only possible with long-term monitoring and a large number of throughfall events measured in parallel at different forest stands. We therefore based our analysis on 346 rainfall events across six different forest stands at the long-term terrestrial environmental observatory TERENO Northeast Germany. These forest stands included pure stands of beech, pine and young pine, and mixed stands of oak-beech, pine-beech and pine-oak-beech. Throughfall was overall relatively low, with 54-68% of incident rainfall in summer. Based on the large number of events it was possible to not only investigate mean or cumulative throughfall but also its statistical distribution. The distributions of throughfall fractions show distinct differences between the three types of forest stands (deciduous, mixed and pine). The distributions of the deciduous stands have a pronounced peak at low throughfall fractions and a secondary peak at high fractions in summer, as well as a pronounced peak at higher throughfall fractions in winter. Interestingly, the mixed stands behave like deciduous stands in summer and like pine stands in winter: their summer distributions are similar to the deciduous stands but the winter peak at high throughfall fractions is much less pronounced. The seasonal comparison further revealed that the wooden components and the leaves behaved differently in their throughfall response to incident rainfall, especially at higher rainfall intensities. 
These results are of interest for estimating forest water budgets and in the context of hydrological and land surface modelling where poor simulation of throughfall would adversely impact estimates of evaporative recycling and water availability for vegetation and runoff.
Indices of oscillatory behavior are conveniently obtained by projecting the fields in question into a phase space of a few (mostly just two) dimensions; empirical orthogonal functions (EOFs) or other, more dynamical, modes are typically used for the projection. If sufficiently coherent and in quadrature, the projected variables simply describe a rotating vector in the phase space, which then serves as the basis for predictions. Using the boreal summer intraseasonal oscillation (BSISO) as a test case, an alternative procedure is introduced: it augments the original fields with their Hilbert transform (HT) to form a complex series and projects it onto its (single) dominant EOF. The real and imaginary parts of the corresponding complex pattern and index are compared with those of the original (real) EOF. The new index explains slightly less variance of the physical fields than the original, but it is much more coherent, partly from its use of future information by the HT. Because the latter is in the way of real-time monitoring, the index can only be used in cases with predicted physical fields, for which it promises to be superior. By developing a causal approximation of the HT, a real-time variant of the index is obtained whose coherency is comparable to the noncausal version, but with smaller explained variance of the physical fields. In test cases the new index compares well to other indices of BSISO. The potential for using both indices as an alternative is discussed.
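The core construction – augmenting a real series with its Hilbert transform to form an analytic (complex) signal – can be sketched via the FFT. For a pure cosine sampled over whole periods, the imaginary part recovers the quadrature component exactly. This simplified illustration is not the paper's code, which projects multivariate fields onto a complex EOF:

```python
import numpy as np

def analytic_signal(x):
    """x + i*HT(x) via the FFT: zero out negative frequencies,
    double positive ones (the non-causal Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1  # Nyquist bin for even-length series
    return np.fft.ifft(X * h)

# Two full periods of a cosine, sampled evenly:
t = np.linspace(0, 4 * np.pi, 256, endpoint=False)
z = analytic_signal(np.cos(t))
# z.real reproduces cos(t); z.imag is its quadrature partner sin(t)
```

Because the FFT uses the whole series, this transform is non-causal, which is exactly why the paper develops a causal approximation for real-time use.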
Extreme rainfall events of short duration, in the range of hours and below, are attracting growing attention because of the damage they cause through flash floods and because of their possible intensification by anthropogenic climate change. Based on partly very long (> 50 years) and temporally highly resolved time series (≤ 15 minutes), this study investigates possible trends in heavy rainfall intensities for stations in the Swiss and Austrian Alpine regions as well as for the Emscher-Lippe region in North Rhine-Westphalia. It becomes clear that extreme precipitation intensities are increasing, which can be well explained by the warming of the regional climate: the analyses of long-term trends in exceedance sums and return levels show considerable uncertainties, but suggest an increase on the order of 30% per century. In addition, based on a "medium" climate simulation for the 21st century, this paper describes a projection of extreme precipitation intensities at very high temporal resolution for selected stations in the Emscher-Lippe region. A coupled spatial and temporal downscaling is applied, whose decisive novelty is that it accounts for the dependence of local rainfall intensity on air temperature. The procedure involves two steps: first, large-scale climate fields at daily resolution are statistically linked by regression to the temperature and precipitation values of the stations (spatial downscaling). In the second step, these station values are disaggregated to a temporal resolution of 10 minutes using a so-called multiplicative stochastic cascade model (MC) (temporal downscaling). The novel, temperature-sensitive variant additionally uses air temperature as an explanatory variable for precipitation intensities. In this way, the higher atmospheric moisture content expected with warming, which follows from the Clausius-Clapeyron relation (CC), is incorporated into the temporal downscaling.
For the statistical evaluation of extreme short-term precipitation, the upper quantiles (99.9%), exceedance sums (P > 5 mm), and 3-year return levels of a duration of ≤ 15 minutes were considered. This choice allows the simultaneous analysis of both extreme value statistics and their long-term trends; slight deviations from it affect the main results only marginally. Only by including temperature is the observed temperature dependence of the extreme quantiles (CC scaling) well reproduced. When observational data are compared with present-day simulations of the model cascade, the temperature-sensitive procedure yields consistent results. Compared with the developments of recent decades, similar or even stronger increases in extreme precipitation intensities are projected for the future. This is remarkable insofar as these intensities appear to be determined mainly by local temperature, since the projected trends in daily precipitation totals are negligible for this region.
Fires are a fundamental part of the Earth System. In recent decades, they have been altering ecosystem structure, biogeochemical cycles, and atmospheric composition with unprecedented rapidity. In this study, we implement a complex-networks-based methodology to track individual fires over space and time. We focus on extreme fires – the 5% most intense fires – in the tropical forests of the Brazilian Legal Amazon over the period 2002-2019. We analyse the interannual variability in the number and spatial patterns of extreme forest fires in years with diverse climatic conditions and anthropogenic pressure to examine potential synergies between climatic and anthropogenic drivers. We observe that major droughts, which increase forest flammability, co-occur with years of many extreme fires, but also that it is fundamental to consider anthropogenic activities to understand the distribution of extreme fires. Deforestation fires, fires escaping from managed lands, and other types of forest degradation and fragmentation provide the ignition sources for fires to ignite in the forests. We find that all extreme forest fires identified are located within a 0.5 km distance from forest edges, and up to 56% of them are within a 1 km distance from roads (which increases to 73% within 5 km), showing a strong correlation that defines the spatial patterns of extreme fires.
Relationships between climate, species composition, and species richness are of particular importance for understanding how boreal ecosystems will respond to ongoing climate change. This study aims to reconstruct changes in terrestrial vegetation composition and taxa richness during the glacial Late Pleistocene and the interglacial Holocene in the sparsely studied southeastern Yakutia (Siberia) by using pollen and sedimentary ancient DNA (sedaDNA) records. Pollen and sedaDNA metabarcoding data using the trnL g and h markers were obtained from a sediment core from Lake Bolshoe Toko. Both proxies were used to reconstruct the vegetation composition, while metabarcoding data were also used to investigate changes in plant taxa richness. The combination of pollen and sedaDNA approaches allows a robust estimation of regional and local past terrestrial vegetation composition around Bolshoe Toko during the last similar to 35,000 years. Both proxies suggest that during the Late Pleistocene, southeastern Siberia was covered by open steppe-tundra dominated by graminoids and forbs with patches of shrubs, confirming that steppe-tundra extended far south in Siberia. Both proxies show disturbance at the transition between the Late Pleistocene and the Holocene suggesting a period with scarce vegetation, changes in the hydrochemical conditions in the lake, and in sedimentation rates. Both proxies document drastic changes in vegetation composition in the early Holocene with an increased number of trees and shrubs and the appearance of new tree taxa in the lake's vicinity. The sedaDNA method suggests that the Late Pleistocene steppe-tundra vegetation supported a higher number of terrestrial plant taxa than the forested Holocene. This could be explained, for example, by the "keystone herbivore" hypothesis, which suggests that Late Pleistocene megaherbivores were able to maintain a high plant diversity. 
This is discussed, in the light of the data, alongside the broadly accepted species-area hypothesis, since steppe-tundra covered such an extensive area during the Late Pleistocene.
The growing worldwide impact of flood events has motivated the development and application of global flood hazard models (GFHMs). These models have become useful tools for flood risk assessment and management, especially in regions where little local hazard information is available. One of the key uncertainties associated with GFHMs is the estimation of extreme flood magnitudes to generate flood hazard maps. In this study, the 1-in-100-year flood (Q100) magnitude was estimated using flow outputs from four global hydrological models (GHMs) and two global flood frequency analysis datasets for 1350 gauges across the conterminous US. The annual maximum flows of the observed and modelled streamflow time series were bootstrapped to evaluate the sensitivity of the underlying data to extrapolation. Results show clear spatial patterns of bias associated with each method. GHMs show a general tendency to overpredict at Western US gauges and underpredict at Eastern US gauges. The GloFAS and HYPE models underpredict Q100 by more than 25% at 68% and 52% of gauges, respectively. The PCR-GLOBWB and CaMa-Flood models overestimate Q100 by more than 25% at 60% and 65% of gauges in the West and Central US, respectively. The global frequency analysis datasets show spatial variability that differs from the GHMs. We found that river basin area and topographic elevation explain some of the spatial variability in predictive performance found in this study. However, no single model or method performs best everywhere, and we therefore recommend that a weighted ensemble of predictions of extreme flood magnitudes be used for large-scale flood hazard assessment.
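The bootstrapping of annual maxima can be sketched as follows. Here a Gumbel fit by the method of moments stands in for the study's frequency-analysis methods, and the synthetic annual maximum flows are made up for illustration:

```python
import numpy as np

def gumbel_q(annual_max, return_period=100.0):
    """Return level for a given return period from a Gumbel fit
    by the method of moments (illustrative only)."""
    am = np.asarray(annual_max, dtype=float)
    beta = np.sqrt(6.0) * am.std(ddof=1) / np.pi   # scale parameter
    mu = am.mean() - 0.5772 * beta                 # location parameter
    p = 1.0 - 1.0 / return_period                  # non-exceedance prob.
    return mu - beta * np.log(-np.log(p))

rng = np.random.default_rng(42)
annual_max = rng.gumbel(loc=500.0, scale=120.0, size=40)  # synthetic m3/s

# Resample the annual maxima to gauge sensitivity to record length:
q100_samples = [gumbel_q(rng.choice(annual_max, size=len(annual_max)))
                for _ in range(500)]
q100_low, q100_high = np.percentile(q100_samples, [5, 95])
```

The spread between `q100_low` and `q100_high` conveys how strongly the extrapolated Q100 depends on which years happen to be in the record.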
An increase in overall temperature due to climate change, and the associated increase in heat waves, prompted the State Agency for Nature, Environment and Consumer Protection of North Rhine-Westphalia (LANUV) to publish a guideline for protecting the positive climate function of urban soils. Building on this, the cooling capacity of urban soils was quantified at the regional level for the city of Düsseldorf in order to identify areas particularly worthy of protection. Within the ExTrass project, the cooling capacity of urban soils within Remscheid was now to be quantified, but on the basis of freely available data. Such a data basis precludes modelling the soil water balance, which was the foundation of the quantification in Düsseldorf. However, the approach presented here makes it possible to carry out such an investigation in other municipalities within Germany with relatively little effort.
The cooling capacity of the soils was estimated via the plant-available water capacity, which indicates the water storage volume of the uppermost rooted soil zone. It is this soil water store that supplies water for evapotranspiration and thus largely defines the cooling capacity of a soil, i.e., through direct evaporation of soil water and through the transpiration of water by plants. The map was compiled from: (a) the soil map of North Rhine-Westphalia (BK50), to determine the plant-available water capacity (nFK) per area; (b) the land-use dataset UrbanAtlas 2012, combined with a literature review, to derive the influence of land use on nFK values, particularly with regard to sealing and compaction; and (c) OpenStreetMap (OSM), to determine the share of sealed surfaces more precisely than would have been possible on the basis of the UrbanAtlas alone.
This approach proved suitable for investigating the spatial distribution of the potential soil cooling function within a city. Note that the influence of groundwater could not be taken into account in Remscheid: groundwater conditions there are expected to vary on small spatial scales owing to the geological and topographic situation, so that there is no continuous, mapped aquifer.
Allotment gardens, parks, and cemeteries in the inner city, and the land-use classes forest and grassland in general, were identified as areas with particularly high potential soil cooling capacity. Such areas are particularly worthy of protection. The analysis of the storage levels of the upper soil zone, based on the compiled map of the potential soil cooling function and the climatic water balance, showed that inner-city areas with a small soil water store in particular lose their cooling function early in the summer of a dry year and thus have a reduced positive climate function during heat waves. This finding is supported by an evaluation of the normalized difference vegetation index (NDVI), which was used to examine the change in plant vitality before and after a heat period in June/July 2018.
Measurements with meteobikes, a setup suitable for continuously measuring temperature during a bicycle ride, support the finding that inner-city green spaces such as parks have a positive effect on the urban microclimate. These measurements further show that the topography within the study area probably also influences the heating of individual areas and the temperature distribution. The map of the potential cooling function for Remscheid presented here should be incorporated as a supplement into the climate function map for Remscheid, replacing the existing layer "areal climate function", which only considers land use.
Glacial lakes in the Hindu Kush–Karakoram–Himalayas–Nyainqentanglha (HKKHN) region have grown rapidly in number and area in past decades, and some dozens have drained in catastrophic glacial lake outburst floods (GLOFs). Estimating regional susceptibility of glacial lakes has largely relied on qualitative assessments by experts, thus motivating a more systematic and quantitative appraisal. Against the backdrop of current climate-change projections and the potential of elevation-dependent warming, an objective and regionally consistent assessment is urgently needed. We use an inventory of 3390 moraine-dammed lakes and their documented outburst history in the past four decades to test whether elevation, lake area and its rate of change, glacier-mass balance, and monsoonality are useful inputs to a probabilistic classification model. We implement these candidate predictors in four Bayesian multi-level logistic regression models to estimate the posterior susceptibility to GLOFs. We find that mostly larger lakes have been more prone to GLOFs in the past four decades regardless of the elevation band in which they occurred. We also find that including the regional average glacier-mass balance improves the model classification. In contrast, changes in lake area and monsoonality play ambiguous roles. Our study provides the first quantitative evidence that GLOF susceptibility in the HKKHN scales with lake area, though less so with its dynamics. Our probabilistic prognoses offer improvement compared to a random classification based on average GLOF frequency. Yet they also reveal some major uncertainties that have remained largely unquantified previously and that challenge the applicability of single models. Ensembles of multiple models could be a viable alternative for more accurately classifying the susceptibility of moraine-dammed lakes to GLOFs.
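A minimal, single-level analogue of such a probabilistic classifier can be sketched with a random-walk Metropolis sampler on synthetic data. The predictor (log lake area), the coefficients, and the sample size below are illustrative assumptions, not values from the study, which fits four multi-level models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the lake inventory: log lake area as the single
# predictor, outburst (0/1) as the response. Coefficients are illustrative.
n = 500
log_area = rng.normal(0.0, 1.0, n)
true_logit = -3.0 + 1.2 * log_area          # larger lakes -> higher susceptibility
p = 1.0 / (1.0 + np.exp(-true_logit))
outburst = rng.binomial(1, p)

def log_posterior(beta):
    """Log posterior: Bernoulli likelihood plus weak N(0, 10^2) priors."""
    eta = beta[0] + beta[1] * log_area
    loglik = np.sum(outburst * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum(beta ** 2) / 10.0 ** 2
    return loglik + logprior

# Random-walk Metropolis sampler
beta = np.zeros(2)
lp = log_posterior(beta)
samples = []
for step in range(20000):
    prop = beta + rng.normal(0.0, 0.1, 2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if step >= 5000:                         # discard burn-in
        samples.append(beta.copy())

samples = np.array(samples)
slope_mean = samples[:, 1].mean()
print(f"posterior mean slope on log lake area: {slope_mean:.2f}")
```

A positive posterior slope on log area would correspond to the paper's finding that susceptibility scales with lake size.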
Wildfires, as a key disturbance in forest ecosystems, are shaping the world's boreal landscapes. Changes in fire regimes are closely linked to a wide array of environmental factors, such as vegetation composition, climate change, and human activity. Arctic and boreal regions and, in particular, Siberian boreal forests are experiencing rising air and ground temperatures with the subsequent degradation of permafrost soils leading to shifts in tree cover and species composition. Compared to the boreal zones of North America or Europe, little is known about how such environmental changes might influence long-term fire regimes in Russia. The larch-dominated eastern Siberian deciduous boreal forests differ markedly from the composition of other boreal forests, yet data about past fire regimes remain sparse. Here, we present a high-resolution macroscopic charcoal record from lacustrine sediments of Lake Khamra (southwest Yakutia, Siberia) spanning the last ca. 2200 years, including information about charcoal particle sizes and morphotypes. Our results reveal a phase of increased charcoal accumulation between 600 and 900 CE, indicative of relatively high amounts of burnt biomass and high fire frequencies. This is followed by an almost 900-year-long period of low charcoal accumulation without significant peaks likely corresponding to cooler climate conditions. After 1750 CE fire frequencies and the relative amount of biomass burnt start to increase again, coinciding with a warming climate and increased anthropogenic land development after Russian colonization. In the 20th century, total charcoal accumulation decreases again to very low levels despite higher fire frequency, potentially reflecting a change in fire management strategies and/or a shift of the fire regime towards more frequent but smaller fires. 
A similar pattern for different charcoal morphotypes and comparison to a pollen and non-pollen palynomorph (NPP) record from the same sediment core indicate that broad-scale changes in vegetation composition were probably not a major driver of recorded fire regime changes. Instead, the fire regime of the last two millennia at Lake Khamra seems to be controlled mainly by a combination of short-term climate variability and anthropogenic fire ignition and suppression.
Knowing the source and runout of debris flows can help in planning strategies aimed at mitigating these hazards. Our research in this paper focuses on developing a novel approach for optimizing runout models for regional susceptibility modelling, with a case study in the upper Maipo River basin in the Andes of Santiago, Chile. We propose a two-stage optimization approach for automatically selecting parameters for estimating runout path and distance. This approach optimizes the random-walk and Perla et al.'s (PCM) two-parameter friction model components of the open-source Gravitational Process Path (GPP) modelling framework. To validate model performance, we assess the spatial transferability of the optimized runout model using spatial cross-validation, including exploring the model's sensitivity to sample size. We also present diagnostic tools for visualizing uncertainties in parameter selection and model performance. Although there was considerable variation in optimal parameters for individual events, we found our runout modelling approach performed well at regional prediction of potential runout areas. We also found that although a relatively small sample size was sufficient to achieve generally good runout modelling performance, larger sample sizes (i.e. ≥ 80) had higher model performance and lower uncertainties for estimating runout distances at unknown locations. We anticipate that this automated approach using the open-source R software and the System for Automated Geoscientific Analyses geographic information system (SAGA-GIS) will make process-based debris-flow models more readily accessible and thus enable researchers and spatial planners to improve regional-scale hazard assessments.
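The friction component can be sketched as follows. The velocity update is the commonly cited Perla et al. (1980) two-parameter (PCM) form; the slope profile, the "observed" runout, and the toy grid search (standing in for the paper's two-stage optimization in SAGA-GIS and R) are hypothetical:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def pcm_runout(profile, mu, md):
    """March a flow down a slope profile (list of (slope_deg, length_m)
    segments) with the PCM two-parameter model and return the runout
    distance. mu: sliding friction coefficient; md: mass-to-drag ratio (m)."""
    v2 = 0.0    # squared velocity
    dist = 0.0
    for slope_deg, length in profile:
        theta = math.radians(slope_deg)
        alpha = G * (math.sin(theta) - mu * math.cos(theta))
        decay = math.exp(-2.0 * length / md)
        v2_new = alpha * md * (1.0 - decay) + v2 * decay
        if v2_new <= 0.0:
            # flow stops within this segment: solve v^2(x) = 0 for x
            x = 0.5 * md * math.log((v2 - alpha * md) / (-alpha * md))
            return dist + min(max(x, 0.0), length)
        v2 = v2_new
        dist += length
    return dist

# Illustrative profile: steep source area, transition zone, gentle fan
profile = [(35.0, 300.0), (20.0, 200.0), (8.0, 1000.0)]

# Toy grid search standing in for the automated parameter optimization
observed_runout = 900.0  # hypothetical mapped runout distance (m)
best = min(
    ((mu, md) for mu in [0.1, 0.15, 0.2, 0.25, 0.3] for md in [50, 100, 200, 500]),
    key=lambda pars: abs(pcm_runout(profile, *pars) - observed_runout),
)
print("best (mu, M/D):", best, "-> runout:", round(pcm_runout(profile, *best), 1), "m")
```

In a regional application, such per-event optima would then be validated with spatial cross-validation rather than a single fit as here.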
Ongoing glacier retreat exposes more sediment deposits, making them more susceptible to erosion. Increased sediment export rates endanger water quality as well as water supply through reservoir sedimentation. To better understand these hazards and the processes behind them, erosion must be studied especially in high-alpine catchments. In this bachelor's thesis, sediment concentrations and further environmental variables (discharge, precipitation, and temperature) were measured in the Rofental, Ötztal Alps, and in a heavily glaciated sub-catchment of the Rofental. A Quantile Regression Forest model was used to determine the relationship between sediment concentration and the measured environmental conditions. The variables were aggregated over different time steps, which allowed past hydroclimatic conditions to be taken into account. With this knowledge of the influence of the various predictors, sediment concentration could be modelled retrospectively and continuously using a Monte Carlo approach, allowing statements about annual sediment export rates. In addition, turbidity, which can be regarded as an indicator of sediment concentration, was measured. By determining the correlation between the modelled data and the measured turbidity, the explanatory power of the model could be assessed. It was shown that the Quantile Regression Forest model is suitable for reconstructing the sediment dynamics in the Rofental. It further emerged that discharge has the greatest influence on sediment dynamics in both study areas, although the relevance of the various variables differed strongly between the two areas.
Measured turbidity data and the modelled sediment concentrations were strongly positively correlated, although debris flows, measurement errors, and the tapping of new sediment deposits reduced the model quality.
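The Monte Carlo step from predicted concentration quantiles to seasonal export rates can be illustrated as follows. The discharge series, the quantile spreads, and the rating-type relation are synthetic stand-ins, not Rofental measurements or actual Quantile Regression Forest output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly series for one melt season (synthetic stand-ins):
n_hours = 120 * 24
discharge = 2.0 + 1.5 * np.abs(np.sin(np.arange(n_hours) * 2 * np.pi / 24))  # m^3/s

# Stand-in for Quantile Regression Forest output: predicted quantiles of
# suspended sediment concentration (g/m^3). Here they scale with a toy
# rating-type median, so interpolation can act on the scaling factors.
q_levels = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
factors = np.array([0.4, 0.7, 1.0, 1.4, 2.2])
median_ssc = 50.0 * discharge ** 1.5

# Monte Carlo: draw one concentration per timestep by inverse-CDF
# interpolation between the quantiles, integrate to a seasonal load, repeat.
loads_t = []
for _ in range(200):
    u = rng.uniform(0.05, 0.95, size=n_hours)
    ssc = median_ssc * np.interp(u, q_levels, factors)   # g/m^3
    load_g = np.sum(ssc * discharge * 3600.0)            # g per season
    loads_t.append(load_g / 1e6)                         # tonnes
loads_t = np.array(loads_t)
print(f"seasonal sediment export: {np.median(loads_t):.0f} t "
      f"(5-95 %: {np.percentile(loads_t, 5):.0f}-{np.percentile(loads_t, 95):.0f} t)")
```

The spread of the Monte Carlo realisations is what allows annual export rates to be reported with an uncertainty band rather than as a single number.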
Developing countries are increasingly impacted by floods, especially in Asia. Traditional flood risk management, using structural measures such as levees, can have negative impacts on the livelihoods of social groups that are more vulnerable. Ecosystem-based adaptation (EbA) provides a complementary approach that is potentially more inclusive of groups that are commonly described as more vulnerable, such as the poor and women. However, there is a lack of disaggregated and quantitative information on the potential of EbA to support vulnerable groups of society. This paper provides a quantitative analysis of the differences in vulnerability to flooding as well as preferences for EbA benefits across income groups and gender. We use data collected through a survey of households in urban and rural Central Vietnam which included a discrete choice experiment on preferences for ecosystem services. A total of 1,010 households were surveyed during 2017 through a random sampling approach. Preferences are measured in monetary and non-monetary terms to avoid issues that may arise from financial constraints faced by respondents, especially the more vulnerable groups. Our results reveal that lower income households and women are overall more vulnerable than their counterparts and have stronger preferences for the majority of the EbA benefits, including flood protection, seafood abundance, tourism, and recreation suitability. These findings strongly indicate that EbA is indeed a promising tool to support groups of society that are especially vulnerable to floods. These results provide crucial insights for future implementation of EbA projects and for the integration of EbA with goals targeted at complying with the Sendai Framework and Sustainable Development Goals. (c) 2021 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Heat waves are increasingly common in many countries across the globe, and also in Germany, where this study is set. Heat poses severe health risks, especially for vulnerable groups such as the elderly and children. This case study explores visitors' behavior and perceptions during six weekends in the summer of 2018 at a 6-month open-air horticultural show. Data from a face-to-face survey (n = 306) and behavioral observations (n = 2750) were examined using correlation analyses, ANOVA, and multiple regression analyses. Differences in weather perception, risk awareness, adaptive behavior, and activity level were observed between rainy days (maximum daily temperature below 25°C), warm summer days (25°-30°C), and hot days (>30°C). Respondents reported a high level of heat risk awareness, but most (90%) were unaware of actual heat warnings. During hot days, more adaptive measures were reported and observed. Older respondents reported taking the highest number of adaptive measures. We observed the highest level of adaptation in children, but they also showed the highest activity level. From our results we discuss how to facilitate individual adaptation to heat stress at open-air events by taking the heterogeneity of visitors into account. To mitigate negative health outcomes for citizens in the future, we argue for tailored risk communication aimed at vulnerable groups.
SIGNIFICANCE STATEMENT: People around the world are facing higher average temperatures. While higher temperatures make open-air events a popular leisure time activity in summer, heat waves are a threat to health and life. Since there is not much research on how visitors of such events perceive different weather conditions, especially hot temperatures, we explored this in our case study in southern Germany at an open-air horticultural show in the summer of 2018.
We discovered deficits both in people's awareness of current heat risk and the heat adaptation they carry out themselves. Future research should further investigate risk perception and adaptation behavior of private individuals, whereas event organizers and authorities need to continually focus on risk communication and facilitate individual adaptation of their visitors.
Cosmic-ray neutron sensing (CRNS) is a powerful technique for retrieving representative estimates of soil water content at a horizontal scale of hectometres (the “field scale”) and depths of tens of centimetres (“the root zone”). This study demonstrates the potential of the CRNS technique to obtain spatio-temporal patterns of soil moisture beyond the integrated volume from isolated CRNS footprints. We use data from an observational campaign carried out between May and July 2019 that featured a dense network of more than 20 neutron detectors with partly overlapping footprints in an area that exhibits pronounced soil moisture gradients within one square kilometre. The present study is the first to combine these observations in order to represent the heterogeneity of soil water content at the sub-footprint scale as well as between the CRNS stations. First, we apply a state-of-the-art procedure to correct the observed neutron count rates for static effects (heterogeneity in space, e.g. soil organic matter) and dynamic effects (heterogeneity in time, e.g. barometric pressure). Based on the homogenized neutron data, we investigate the robustness of a calibration approach that uses a single calibration parameter across all CRNS stations. Finally, we benchmark two different interpolation techniques for obtaining spatio-temporal representations of soil moisture: first, ordinary Kriging with a fixed range; second, spatial interpolation complemented by geophysical inversion (“constrained interpolation”). To that end, we optimize the parameters of a geostatistical interpolation model so that the error in the forward-simulated neutron count rates is minimized, and suggest a heuristic forward operator to make the optimization problem computationally feasible. 
Comparison with independent measurements from a cluster of soil moisture sensors (SoilNet) shows that the constrained interpolation approach is superior for representing horizontal soil moisture gradients at the hectometre scale. The study demonstrates how a CRNS network can be used to generate coherent, consistent, and continuous soil moisture patterns that could be used to validate hydrological models or remote sensing products.
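The correction and calibration steps described above can be sketched with the standard literature formulas: multiplicative correction factors for barometric pressure and air humidity, and a Desilets-type transfer function with a single calibration parameter N0. The constants below are the commonly used published values and the station numbers are invented, not the site-specific quantities calibrated in the study:

```python
import math

# Commonly used literature constants (not the study's calibrated values)
A0, A1, A2 = 0.0808, 0.372, 0.115   # Desilets shape parameters
BETA = 130.0                         # hPa, barometric attenuation length
ALPHA_H = 0.0054                     # m^3/g, air-humidity coefficient

def correct_neutrons(n_raw, pressure, p_ref, abs_humidity, h_ref):
    """Correct a raw neutron count rate (counts per hour) for barometric
    pressure (hPa) and absolute air humidity (g/m^3)."""
    f_p = math.exp((pressure - p_ref) / BETA)
    f_h = 1.0 + ALPHA_H * (abs_humidity - h_ref)
    return n_raw * f_p * f_h

def soil_moisture(n_corr, n0):
    """Gravimetric water content from the corrected count rate via the
    Desilets-type equation; n0 is the single calibration parameter."""
    return A0 / (n_corr / n0 - A1) - A2

# Toy numbers for one station sharing the network-wide n0
n_corr = correct_neutrons(n_raw=1180.0, pressure=1008.0, p_ref=1013.0,
                          abs_humidity=9.0, h_ref=6.0)
theta = soil_moisture(n_corr, n0=2100.0)
print(f"corrected counts: {n_corr:.0f} cph, theta_grav: {theta:.3f} g/g")
```

Using a single n0 across all stations is exactly the calibration assumption whose robustness the study investigates.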
Sudden glacier advances in the Cachapoal Valley, Southern Central Andes of Chile (34°S)
(2021)
Throughout the Andes Mountains of South America, a general trend of glacier shrinkage has taken place in modern times. However, a few glaciers have undergone considerable temporary advances or even surged during the mid-19th to 20th century CE. These valley glaciers are mainly located in the Central Andes of Chile and Argentina. The research presented here focuses on the changes of the Cachapoal Glacier in the Southern Central Andes of Chile. Spectacular glacier advances occurred at least three times in historical times, which led to river blockages and successive lake outburst floods. The glacier advances were reconstructed with a multi-method approach including geomorphological mapping, Be-10 cosmogenic exposure dating of moraines, multi-temporal comparison of historical and recent photographs and paintings, the interpretation of aerial photographs and satellite images, and the analysis of early travel reports. The article highlights the diversity of environmental conditions for the formation of glaciers in terms of the topographical and climatic setting and the resulting distinct glacier behavior along the Andes Mountains. It is argued that the advances of the Cachapoal Glacier are intrinsic to the glacier type and not necessarily climate-dependent. This is characteristic of avalanche-fed glaciers, whose dynamics are strongly controlled by the topographic setting, sudden inputs of ice and rock avalanches, and the specific debris transfer system and hydrological drainage pattern. At the regional level, the fluctuations of the Cachapoal Glacier are compared with glaciers of neighboring mountain ranges in the Southern Central Andes and, at the global scale, with glaciers of the Karakoram Mountains in High Asia that show a similarly dynamic behavior.
Rivers play a relevant role in nutrient turnover during the transport from land to ocean. In rivers, highly dynamic planktonic processes are more important than in streams, making it necessary to link the dynamics of nutrient turnover to the control mechanisms of phytoplankton. We investigated the basic conditions leading to high phytoplankton biomass and the corresponding nutrient dynamics in the eutrophic, 8th-order River Elbe (Germany). In a first step, we performed six Lagrangian sampling campaigns in the lower river section under different hydrological conditions. While nutrient concentrations remained high at low algal densities in autumn and at moderate discharge in summer, high algal concentrations occurred at low discharge in summer. Under these conditions, concentrations of silica and nitrate decreased and rates of nitrate assimilation were high. Soluble reactive phosphorus was depleted and particulate phosphorus increased inversely. Rising molar C:P ratios of seston indicated a phosphorus limitation of phytoplankton, so far rarely observed in eutrophic large rivers. Global radiation combined with mixing depth had strong predictive power for the maximum chlorophyll concentration. In a second step, we estimated nutrient turnover, exemplified for N, during the campaign with the lowest discharge, based on mass balances and metabolism-based process measurements. Mass balance calculations revealed a total nitrate uptake of 423 mg N m⁻² d⁻¹. Increasing phytoplankton density dominantly explained whole-river gross primary production and the related assimilatory nutrient uptake. In conclusion, riverine nutrient uptake strongly depends on the growth conditions for phytoplankton, which are favored at high irradiation and low discharge.
Detecting whether and how river discharge responds to strong earthquake shaking can be time-consuming and prone to operator bias when checking hydrographs from hundreds of gauging stations. We use Bayesian piecewise regression models to show that up to a fifth of all gauging stations across Chile had their largest change in daily streamflow trend on the day of the Mw 8.8 Maule earthquake in 2010. These stations cluster distinctly in the near field though the number of detected streamflow changes varies with model complexity and length of time window considered. Credible seismic streamflow changes at several stations were the highest detectable in eight months, with an increased variance of discharge surpassing the variance of discharge following rainstorms. We conclude that Bayesian piecewise regression offers new and unbiased insights into the duration, trend, and variance of streamflow response to strong earthquakes, and into how this response compares to that following rainstorms.
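A least-squares analogue of this change-point detection (without the Bayesian machinery, and hence without posterior uncertainty) can be sketched by scanning candidate breakpoints and fitting two independent linear trends. The discharge series below is synthetic, with an abrupt trend change built in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily discharge anomaly: flat, then a post-seismic rise at
# day 150 (illustrative, not Chilean gauge data).
t = np.arange(300, dtype=float)
signal = np.where(t < 150, 0.0, 0.08 * (t - 150))
y = signal + rng.normal(0.0, 0.5, t.size)

def sse_two_segments(t, y, k):
    """Sum of squared errors of two independent linear fits split at k."""
    sse = 0.0
    for sl in (slice(0, k), slice(k, t.size)):
        A = np.column_stack([np.ones(t[sl].size), t[sl]])
        coef = np.linalg.lstsq(A, y[sl], rcond=None)[0]
        sse += np.sum((y[sl] - A @ coef) ** 2)
    return sse

# Scan candidate breakpoints; the day with the lowest SSE is the
# least-squares analogue of the most probable change point.
candidates = range(10, 290)
best_k = min(candidates, key=lambda k: sse_two_segments(t, y, k))
print("detected trend-change day:", int(best_k))
```

The Bayesian piecewise models in the paper go further by putting a posterior distribution on the change point, which is what allows "credible" changes to be distinguished from noise.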
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out, for the first time, a comprehensive analysis of flood hazards in Turkey from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, improvements of flood risk mitigation and the facilitated monitoring of progress and achievements while implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start with the event impacts based on a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, within the frequency analysis of the flood-triggering circulation pattern types, it was discovered that events involving heavy precipitation were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
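The clustering step can be illustrated with a naive average-linkage agglomerative algorithm on toy event descriptors; the features and their values below are hypothetical, not TABB records:

```python
import numpy as np

# Toy flood-event descriptors (rows: events; columns: standardised
# precipitation depth, catchment size, snowmelt index). Illustrative only.
X = np.array([
    [2.1, 0.3, 0.1], [1.9, 0.4, 0.0], [2.3, 0.2, 0.2],   # rainfall-driven
    [0.2, 1.8, 0.1], [0.3, 2.0, 0.3],                     # large-catchment
    [0.1, 0.4, 1.9], [0.3, 0.3, 2.2],                     # snowmelt-driven
])

def hierarchical_clusters(X, n_clusters):
    """Naive average-linkage agglomerative clustering: repeatedly merge
    the two clusters with the smallest mean pairwise distance."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] += clusters[b]   # merge b into a ...
        del clusters[b]              # ... and drop b (b > a, so indices stay valid)
    return [sorted(c) for c in clusters]

groups = hierarchical_clusters(X, n_clusters=3)
print(sorted(groups))
```

Each resulting cluster would correspond to one dominant flood-producing mechanism, which is the interpretation step the study performs on the real event data.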
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method for the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that they deserve more attention at the national scale. The clustering results of the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) present crucial information regarding the source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units) can be recommended in order to provide first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
Over the past decades, floods have caused significant financial losses in Turkey, amounting to US$ 800 million between 1960 and 2014. The Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR) aims to reduce the direct economic loss from disasters in relation to the global gross domestic product (GDP) by 2030. Accordingly, a methodology based on experiences from developing countries was proposed by the United Nations Office for Disaster Risk Reduction (UNDRR) to estimate direct economic losses on the macro-scale. Since Turkey also signed the SFDRR, we aimed to adapt, validate and apply the loss estimation model proposed by the UNDRR in Turkey for the first time. To do so, the well-documented flood event in Mersin of 2016 was used to calibrate the damage ratios for the agricultural, commercial and residential sectors, as well as educational facilities. Case studies between 2015 and 2020 with documented losses were further used to validate the model. Finally, model applications provided initial loss estimates for floods that occurred recently in Turkey. Despite the limited event documentation for each sector, the calibrated model yielded good results when compared to documented losses. Thus, by implementing the UNDRR method, this study provides an approach to estimate the direct economic losses in Turkey on the macro-scale, which can be used to fill gaps in event databases, support the coordination of financial aid after flood events and facilitate monitoring of the progress toward and achievement of Global Target C of the Sendai Framework for Disaster Risk Reduction 2015-2030.
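The macro-scale arithmetic behind such a direct-loss estimate reduces to physically damaged units per sector, multiplied by an average unit value and a calibrated damage ratio, summed over sectors. All figures below are hypothetical placeholders, not the calibrated Turkish parameters:

```python
# Sketch of the macro-scale direct-loss arithmetic. All figures are
# hypothetical, not the damage ratios calibrated on the 2016 Mersin event.
sectors = {
    # sector: (physically damaged units, avg unit value in US$, damage ratio)
    "residential": (120, 60_000, 0.25),
    "commercial":  (35, 150_000, 0.30),
    "agriculture": (900, 1_200, 0.40),   # damaged hectares
    "education":   (3, 500_000, 0.15),
}

def direct_loss(sectors):
    """Total direct economic loss: sum over sectors of
    units * unit value * damage ratio."""
    return sum(units * value * ratio
               for units, value, ratio in sectors.values())

total = direct_loss(sectors)
print(f"estimated direct economic loss: US$ {total:,.0f}")
```

Because only the damaged-unit counts need to be collected after an event, an estimate of this form can be produced quickly, which is what makes the method useful for coordinating financial aid.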
When inferring the magnitude of future heat-related mortality due to climate change, human adaptation to heat should be accounted for. We model long-term changes in minimum mortality temperatures (MMT), a well-established metric denoting the lowest risk of heat-related mortality, as a function of climate change and socio-economic progress across 3820 cities. Depending on the combination of climate trajectories and socio-economic pathways evaluated, by 2100 the risk to human health is expected to decline in 60% to 80% of the cities against contemporary conditions. This is caused by an average global increase in MMTs driven by long-term human acclimatisation to future climatic conditions and economic development of countries. While our adaptation model suggests that negative effects on health from global warming can broadly be kept in check, the trade-offs are highly contingent on the scenario path and location-specific. For high-forcing climate scenarios (e.g. RCP8.5) the maintenance of uninterrupted high economic growth by 2100 is a hard requirement to increase MMTs and level off the negative health effects from additional scenario-driven heat exposure. Choosing a 2°C-compatible climate trajectory alleviates the dependence on fast growth, leaving room for a sustainable economy, and leads to higher reductions of mortality risk.
The debate on post-growth processes has discovered the small, previously unnoticed places of innovation. Forms of production and work that have emerged in an unplanned and uncoordinated way, such as fab labs, open workshops, real-world laboratories, techshops, repair cafés and others, largely elude the customary explanatory and descriptive categories of social-science research. The complexity of their manifestations, their heterogeneous causes, their contingent further development and their hybrid work processes require open-ended analytical reconstructions. The aim of this contribution is to reconstruct processes of spatial contextualization and attribution on the basis of practice-oriented descriptions of activities. This is guided by the question of the extent to which new forms of work are accompanied by specific spatial references and require a differentiated view of different processes of place-making. Open workshops, and the forms of work prevailing in them, are examined more closely as an analytical reference case.
In the following article, the focus is on the transformative potentials created by so-called persistence avant-gardes and prevention innovators. The text extends Blühdorn's guiding concept of narratives of hope (Blühdorn 2017; Blühdorn and Butzlaff 2019) by considering those groups that are marginalized within debates on socio-ecological transformation. With a closer look at the narratives of prevention and blockade that these actors engage, the ambiguous nature of post-growth avant-gardes is carved out. Their discursive, argumentative, and effective inhibition of transitory policies is interpreted as a pro-active potential, rather than a mere obstacle to socio-ecological transformation. Adding a geographical perspective, the paper pleads for a more precise theoretical penetration of the ambivalent figure of avant-gardes when analyzing processes of local and regional postgrowth.
Singularity cities
(2021)
We propose an upgraded gravitational model which provides population counts beyond the binary (urban/non-urban) city simulations. Numerically studying the model output, we find that the radial population density gradients follow power-laws where the exponent is related to the preset gravity exponent gamma. Similarly, the urban fraction decays exponentially, again determined by gamma. The population density gradient can be related to radial fractality and it turns out that the typical exponents imply that cities are basically zero-dimensional. Increasing the gravity exponent leads to extreme compactness and the loss of radial symmetry. We study the shape of the major central cluster by means of another three fractal dimensions and find that overall its fractality is dominated by the size and the influence of gamma is minor. The fundamental allometry, between population and area of the major central cluster, is related to the gravity exponent but restricted to the case of higher densities in large cities. We argue that cities are shaped by power-law proximity. We complement the numerical analysis by economics arguments employing travel costs as well as housing rent determined by supply and demand. Our work contributes to the understanding of gravitational effects, radial gradients, and urban morphology. The model allows city structures to be generated and investigated under laboratory conditions.
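One illustrative reading of such a gravitational growth rule (a sketch under assumptions, not necessarily the paper's exact recipe) places each new population unit in a grid cell with probability proportional to sum_j p_j / d_ij^gamma, so that existing mass attracts further growth with power-law proximity:

```python
import numpy as np

rng = np.random.default_rng(7)

def grow_city(size=31, steps=4000, gamma=2.5):
    """Gravitational growth sketch: each new population unit settles in a
    cell with probability proportional to sum_j p_j / d_ij^gamma."""
    pop = np.zeros((size, size))
    pop[size // 2, size // 2] = 1.0                       # seed unit
    ii, jj = np.indices((size, size))
    coords = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)
    # pairwise cell distances, with a floor on the self-distance to avoid /0
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, 0.5)
    w = d ** -gamma
    attract = w @ pop.ravel()
    for _ in range(steps):
        k = rng.choice(attract.size, p=attract / attract.sum())
        pop.ravel()[k] += 1.0
        attract += w[:, k]        # incremental update of the attraction field
    return pop

pop = grow_city()
centre_share = pop[13:18, 13:18].sum() / pop.sum()
print(f"share of population in the central 5x5 block: {centre_share:.2f}")
```

With a larger gamma the attraction kernel decays faster, concentrating growth near existing mass, which is consistent with the compactness effect the abstract describes.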
Air pollution is a pressing issue that is associated with adverse effects on human health, ecosystems, and climate. Despite many years of effort to improve air quality, nitrogen dioxide (NO2) limit values are still regularly exceeded in Europe, particularly in cities and along streets. This study explores how concentrations of nitrogen oxides (NOx = NO + NO2) in European urban areas have changed over the last decades and how this relates to changes in emissions. To do so, the incremental approach was used, comparing urban increments (i.e. urban background minus rural concentrations) to total emissions, and roadside increments (i.e. urban roadside concentrations minus urban background concentrations) to traffic emissions. In total, nine European cities were assessed. The study revealed that potentially confounding factors, like the impact of urban pollution at rural monitoring sites through atmospheric transport, are generally negligible for NOx. The approach therefore proves particularly useful for this pollutant. The estimated urban increments all showed downward trends, and for the majority of the cities the trends aligned well with the total emissions. However, it was found that factors like very densely populated surroundings or local emission sources in the rural area, such as shipping traffic on inland waterways, restrict the application of the approach for some cities. The roadside increments showed an overall very diverse picture in their absolute values and trends and also in their relation to traffic emissions. This variability and the discrepancies between roadside increments and emissions could be attributed to a combination of local influencing factors at the street level and different aspects introducing inaccuracies to the trends of the emission inventories used, including deficient emission factors.
The application of the incremental approach was evaluated as useful for long-term pan-European studies, but at the same time it was found to be restricted to certain regions and cities due to data availability issues. The results also highlight that using emission inventories for the prediction of future health impacts and compliance with limit values needs to consider the distinct variability in concentrations not only across but also within cities.
Many researchers and politicians believe that the COVID-19 crisis may have opened a "window of opportunity" to spur sustainability transformations. Still, evidence for such a dynamic is currently lacking. Here, we propose the linkage of "big data" and "thick data" methods for monitoring debates on transformation processes by following the COVID-19 discourse on ecological sustainability in Germany. We analysed variations in the topics discussed by applying text mining techniques to a corpus of 84,500 newspaper articles published during the first COVID-19 wave. This allowed us to attain a unique and previously inaccessible "bird's eye view" of how these topics evolved. To deepen our understanding of prominent frames, a qualitative content analysis was undertaken. Furthermore, we investigated public awareness by analysing online search behaviour. The findings show an underrepresentation of sustainability topics in the German news during the early stages of the crisis. Similarly, public awareness regarding climate change was found to be reduced. Nevertheless, by examining the newspaper data in detail, we found that the pandemic is often seen as a chance for sustainability transformations, but not without a set of challenges. Our mixed-methods approach enabled us to bridge knowledge gaps between qualitative and quantitative research by "thickening" and providing context to data-driven analyses. By monitoring whether or not the current crisis is seen as a chance for sustainability transformations, we provide insights for environmental policy in times of crisis.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills
(2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the least studied regions of the world scientifically. In recognition of this situation, this thesis focuses on studying (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Measurements were conducted during the periods 2013-2014 and 2016 at five sites in the Kathmandu Valley, the capital region of Nepal, and at two sites outside of the valley in the Makawanpur and Kaski districts. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. There are slight differences in the diurnal cycles of CO2 and CH4, with the CO2 and CH4 mixing ratios increasing after the afternoon dip until the morning peak the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~200 m) during the night, after which it transitions to a convective mixing layer during the day, and the MLH increases up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise the following day, and the concentration of pollutants increases due to emissions during the night. During the afternoon, the pollutants are diluted by the circulation of the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons. The contribution of regional forest fires and agro-residue burning is seen during the pre-monsoon season. In addition, relatively higher CO values were also observed at the valley outskirts (Bhimdhunga and Naikhandi), which indicates the contribution of regional emission sources. This was also supported by the presence of higher concentrations of O3 during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan, China (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm), as well as at the urban site of Shadnagar, India (CH4: 1.92 ± 0.07 ppm).
The daily maximum 8-hour average O3 in the Kathmandu Valley exceeded the WHO recommended value on more than 80% of days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, in the measurements of the vertical distribution of particulate matter, which were made using an ultralight aircraft and are the first of their kind in the region, an elevated polluted layer at ca. 3000 m a.s.l. was detected over the Pokhara Valley. The layer could be associated with the large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources will provide helpful information for developing management plans and policies to help reduce the risks for the millions of people living in the region.
Global flood models (GFMs) are increasingly being used to estimate global-scale societal and economic risks of river flooding. Recent validation studies have highlighted substantial differences in performance between GFMs and between validation sites. However, it has not been systematically quantified to what extent the choice of the underlying climate forcing and global hydrological model (GHM) influence flood model performance. Here, we investigate this sensitivity by comparing simulated flood extent to satellite imagery of past flood events, for an ensemble of three climate reanalyses and 11 GHMs. We study eight historical flood events spread over four continents and various climate zones. For most regions, the simulated inundation extent is relatively insensitive to the choice of GHM. For some events, however, individual GHMs lead to much lower agreement with observations than the others, mostly resulting from an overestimation of inundated areas. Two of the climate forcings show very similar results, while with the third, differences between GHMs become more pronounced. We further show that when flood protection standards are accounted for, many models underestimate flood extent, pointing to deficiencies in their flood frequency distribution. Our study guides future applications of these models, and highlights regions and models where targeted improvements might yield the largest performance gains.
Flooding is a vast problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and their physical environment exposed to flooding), and vulnerability (the susceptibility of the elements at risk). Floods are natural phenomena and cannot be fully prevented. However, their risk can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk. Anthropogenic climate change causes higher rainfall intensities and sea level rise, and therefore an increase in the scale and frequency of flood events. On the other hand, inappropriate management of risk and structural protection measures may not be very effective for risk reduction. Additionally, risk increases due to the growth in the number of assets and people within flood-prone areas. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impacts of changes in each flood risk component on overall risk, and further their mutual interactions. A multitude of changes along the risk chain are simulated with a regional flood model (RFM) in which all processes from the atmosphere through the catchment and river system to the damage mechanisms are taken into consideration. The impacts of changes in the risk components are explored through plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, and the traditional assumption therefore misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first one, the “real” spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach in which spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns) are included. The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the “real” spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on the flood risk under the different spatial dependence assumptions are investigated.
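The contrast between the complete dependence and independence assumptions described above can be illustrated with a toy Monte Carlo simulation (the number of regions, the damage curve, and all figures below are purely hypothetical and not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_events = 10, 100_000

def damage(T):
    # hypothetical damage curve: damage grows with the return period T
    return 1e6 * np.log(T)

# complete dependence: one shared return period for all regions per event
T_shared = 1.0 / (1.0 - rng.uniform(size=n_events))
total_dep = n_regions * damage(T_shared)

# independence: every region draws its own return period per event
T_ind = 1.0 / (1.0 - rng.uniform(size=(n_events, n_regions)))
total_ind = damage(T_ind).sum(axis=1)

# expected (average) damages agree, but extreme totals are much larger
# under complete dependence -- the overestimation described in the text
print(np.quantile(total_dep, 0.999) / np.quantile(total_ind, 0.999))
```

The average damage is nearly identical under both assumptions; only the tail of the total-damage distribution, and hence benchmark indicators such as the 200-year damage, differs between them.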
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically as a result of possible change scenarios. Risk components that have received little attention so far (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, which is the most frequently investigated component.
The results of the spatial dependence research in this thesis further show that the damage under the false assumption of complete dependence is 100 % larger than the damage under the modelled dependence assumption for events with return periods greater than approximately 200 years in the Elbe catchment. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139 %, 188 % and 246 % for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream of a catchment. Moreover, tail dependence in the model and the flood protection level in the catchments can affect the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, the broader consideration of the risk components, which possibly affect the flood risk in a comprehensive way, and the consideration of the spatial dependence of flood return periods are strongly recommended for a better understanding of flood risk and consequently for a sound flood risk management and mitigation.
High-performance numerical codes are an indispensable tool for hydrogeologists when modeling subsurface flow and transport systems. But as they are written in compiled languages like C/C++ or Fortran, established software packages are rarely user-friendly, limiting a wider adoption of such tools. OpenGeoSys (OGS), an open-source, finite-element solver for thermo-hydro-mechanical-chemical processes in porous and fractured media, is no exception. Graphical user interfaces may increase usability, but do so at a dramatic reduction of flexibility and are difficult or impossible to integrate into a larger workflow. Python offers an optimal trade-off between these goals by providing a highly flexible, yet comparatively user-friendly environment for software applications. Hence, we introduce ogs5py, a Python API for the OpenGeoSys 5 scientific modeling package. It provides a fully Python-based representation of an OGS project, a large array of convenience functions for users to interact with OGS, and connects OGS to the scientific and computational environment of Python.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can foster the development of resistances in stables. Transmission through direct contact with animals and contamination of food has already been proven. The excrements of the animals, combined with a binding material, enable a further potential path of spread into the environment if they are used as organic manure in agricultural landscapes. As most airborne bacteria are attached to particulate matter, this work focuses on atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of manure spreading from the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributed 0.35 – 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions due to the dilution of poultry manure in the soil after manure incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent manure incorporation, although PM10 emissions of manure incorporation were larger than PM10 emissions of manure application for the non-dried manure variants.
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has been recently incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were observed just above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure were applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion, due to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, no contamination with them could be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation or wind erosion. Parallel studies of this project showed that storage of poultry manure for a few days (36 – 72 h) is sufficient to inactivate ESBL-producing E. coli. Further antibiotic-resistant bacteria, i.e. MRSA and VRE, were only found sporadically in the stables and not at all in the dust. Therefore, based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can be considered low.
Floodplains are threatened ecosystems and are not only ecologically meaningful but also important for humans by creating multiple benefits. Many underlying functions, like nutrient retention, carbon sequestration or water regulation, strongly depend on regular inundation. So far, these are approached on the basis of what are called ‘active floodplains’. Active floodplains, defined as statistically inundated once every 100 years, represent less than 10% of a floodplain’s original size. Still, should this remaining area be considered as one homogeneous surface in terms of floodplain function, or are there any alternative approaches to quantify ecologically active floodplains? With the European Flood Hazard Maps, the extent of not only medium floods (T-medium) but also frequent floods (T-frequent) needs to be modelled by all member states of the European Union. For large German rivers, both scenarios were compared to quantify the extent, as well as selected indicators of naturalness derived from inundation. It is assumed that the more natural a floodplain is, the more inundation occurs and the better the functioning. Real inundation was quantified using measured discharges from relevant gauges over the past 20 years. As a result, land uses indicating strong human impacts changed significantly from T-frequent to T-medium floodplains. Furthermore, the extent, water depth and water volume stored in the T-frequent and T-medium floodplains are significantly different. Even T-frequent floodplains experienced inundation at only half of the considered gauges during the past 20 years. This study gives evidence for considering regulation functions on the basis of ecologically active floodplains, meaning floodplains with more frequent inundation than T-medium floodplains delineate.
Today, the Vietnamese Mekong Delta (VMD) in the south of Vietnam is home to 18 million people. The delta also accounts for more than half of the country’s food production and 80% of the exported rice. Due to its low elevation, it is highly susceptible to the risk of fluvial and coastal flooding. Although extreme floods often result in excessive damages and economic losses, the annual flood pulse from the Mekong is vital to sustain agricultural cultivation and the livelihoods of millions of delta inhabitants.
Delta-wide risk management and adaptation strategies are required to mitigate the adverse impacts of extreme events while capitalising on the benefits of floods. However, proper flood risk management has not been implemented in the VMD, because the quantification of flood damage is often overlooked and the risks are thus not quantified. So far, flood management has been exclusively focused on engineering measures, i.e. high- and low-dyke systems, aiming at flood-free or partial inundation control without any consideration of the actual risks or a cost-benefit analysis. Therefore, an analysis of future delta flood dynamics driven by these stressors is valuable to facilitate the transition from sole hazard control towards a risk management approach, which is more cost-effective and also robust against future changes in risk.
Building on these research gaps, this thesis investigates the current state and future projections of flood hazard, damage and risk to rice cultivation, the most important economic activity in the VMD. The study quantifies the changes in risk and hazard brought about by the development of delta-based flood control measures in the last decades, and analyses the expected changes in risk driven by the changing climate, rising sea level, deltaic land subsidence, and finally the development of hydropower projects in the Mekong Basin. For this purpose, flood trend analyses and comprehensive hydraulic modelling were performed, together with the development of a concept to quantify flood damage and risk to rice cultivation.
The analysis of observed flood levels revealed strong and robust increasing trends in flood peak and duration downstream of the high-dyke areas, with a step change in 2000/2001, i.e. after the disastrous flood which initiated the high-dyke development. These changes contrast with the negative trends detected upstream, suggesting that high-dyke development has shifted flood hazard downstream. The findings of the trend analysis were later confirmed by hydraulic simulations of the two recent extreme floods in 2000 and 2011, in which the hydrological boundaries and dyke system settings were interchanged.
However, the high-dyke system was not the only, and often not the main, cause of the shift in flood hazard, as a comparative analysis of these two extreme floods proved. The high-dyke development was responsible for 20–90% of the observed changes in flood level between 2000 and 2011, with large spatial variance. The particular flood hydrographs of the two events had the highest contribution in the northern part of the delta, while the tidal level had a 2–3 times higher influence than the high-dykes in the lower-central and coastal areas downstream of the high-dyke areas. The impact of the high-dyke development was highest in the areas closely downstream of the high-dyke area, just south of the Cambodia-Vietnam border. The hydraulic simulations also validated that the concurrence of the flood peak with spring tides, i.e. high sea levels along the coast, substantially amplified the flood level and inundation in the central and coastal regions.
The risk assessment quantified the economic losses to rice cultivation at USD 25.0 million and USD 115 million (0.02–0.1% of the total GDP of Vietnam in 2011) for the 10-year and the 100-year floods, respectively, with an expected annual damage of about USD 4.5 million. A particular finding is that the flood damage was highly sensitive to flood timing. A 10-year event with an early peak, i.e. in late August-September, could cause as much damage as a 100-year event peaking in October. This finding underlines the importance of reliable early flood warning, which could substantially reduce the damage to rice crops and thus the risk.
The developed risk assessment concept was furthermore applied to investigate two high-dyke development alternatives, which are currently under discussion among the administrative bodies in Vietnam, but also in the public. The first option, favouring the utilization of the current high-dyke compartments as flood retention areas instead of rice cropping during the flood season, could reduce flood hazard and expected losses by 5–40%, depending on the region of the delta. On the contrary, the second option, promoting the further extension of the areas protected by high-dykes to facilitate third-crop rice planting on a larger area, would triple the current expected annual flood damage. This finding challenges the expected economic benefit of triple rice cultivation, in addition to the already known reduction of nutrient supply by floodplain sedimentation and the resulting higher costs for fertilizers.
The economic benefits of the high-dyke and triple rice cropping system are further challenged by the changes in flood dynamics expected in the future. For the middle of the 21st century (2036-2065), an increase of the inundation extent by 20–27% was projected as a consequence of effective sea-level rise. This corresponds to an increase of flood damage to rice crops by USD 26.0, 40.0 and 82.0 million in dry, normal and wet years, respectively, compared to the baseline period 1971-2000.
Hydraulic simulations indicated that the planned massive development of hydropower dams in the Mekong Basin could potentially compensate for the increase in flood hazard and agricultural losses stemming from climate change. However, the benefits of dams for the mitigation of flood losses are highly uncertain, because a) the actual development of the dams is highly disputed, b) the operation of the dams is primarily targeted at power generation, not flood control, and c) this would require international agreements and cooperation, which are difficult to achieve in South-East Asia. The theoretical flood mitigation benefit is additionally challenged by a number of negative impacts of the dam development, e.g. the disruption of floodplain inundation in normal, non-extreme flood years. Adding to the certain reduction of sediment and nutrient loads to the floodplains, hydropower dams will drastically impair rice and agricultural production, the basis of the livelihoods of millions of delta inhabitants.
In conclusion, the VMD is expected to face increasing threats from tide-induced floods in the coming decades. Protecting the entire delta coastline solely with “hard” engineering flood protection structures is neither technically nor economically feasible; adaptation and mitigation actions are urgently required. Better control and reduction of groundwater abstraction is thus strongly recommended as an immediate and high-priority action to reduce land subsidence and thus tidal flooding and salinity intrusion in the delta. Hydropower development in the Mekong basin might offer some theoretical flood protection for the Mekong delta, but due to uncertainties in the operation of the dams and a number of negative effects, dam development cannot be recommended as a strategy for flood management. For the Vietnamese authorities, it is advisable to properly maintain the existing flood protection structures and to develop flexible risk-based flood management plans. In this context, the study showed that the high-dyke compartments can be utilized for emergency flood management in extreme events. For this purpose, a reliable flood forecast is essential, and the action plan should be materialised in official documents and legislation to ensure commitment and consistency in implementation and operation.
Cities can be severely affected by climate change. Hence, many of them have started to develop climate adaptation strategies or implement measures to help prepare for the challenges it will present. This study aims to provide an overview of climate adaptation in 104 German cities. While existing studies on adaptation tracking rely heavily on self-reported data or the mere existence of adaptation plans, we applied the broader concept of adaptation readiness, considering five factors and a total of twelve different indicators, when making our assessments. We clustered the cities depending on the contribution of these factors to the overall adaptation readiness index and grouped them according to their total score and cluster affiliations. This resulted in us identifying four groups of cities. First, a pioneering group comprises twelve (mainly big) cities with more than 500,000 inhabitants, which showed high scores for all five factors of adaptation readiness. Second, a set of 36 active cities follows different strategies for dealing with climate adaptation. Third, a group of 28 cities showed considerably less activity toward climate adaptation, while a fourth set of 28 mostly small cities (with between 50,000 and 99,999 inhabitants) scored the lowest. We consider this final group to be pursuing a 'wait-and-see' approach. Since the city size correlates with the adaptation readiness index, we recommend policymakers introduce funding schemes that focus on supporting small cities, to help them prepare for the impact of a changing climate.
Ranking local climate policy
(2021)
Climate mitigation and climate adaptation are crucial tasks for urban areas and can involve synergies as well as trade-offs. However, few studies have examined how mitigation and adaptation efforts relate to each other in a large number of differently sized cities, and therefore we know little about whether forerunners in mitigation are also leading in adaptation or if cities tend to focus on just one policy field. This article develops an internationally applicable approach to rank cities on climate policy that incorporates multiple indicators related to (1) local commitments on mitigation and adaptation, (2) urban mitigation and adaptation plans and (3) climate adaptation and mitigation ambitions. We apply this method to rank 104 differently sized German cities and identify six clusters: climate policy leaders, climate adaptation leaders, climate mitigation leaders, climate policy followers, climate policy latecomers and climate policy laggards. The article seeks explanations for particular cities' positions and shows that coping with climate change in a balanced way on a high level depends on structural factors, in particular city size, the pathways of local climate policies since the 1990s and funding programmes for both climate mitigation and adaptation.
Dense tree stands and high wind speeds characterize the temperate rainforests of southern Chilean Patagonia, where landslides frequently strip hillslopes of soils, rock, and biomass. Assuming that wind loads on trees promote slope instability, we explore the role of forest cover and wind speed in predicting landslides with a hierarchical Bayesian logistic regression. We find that higher crown openness and wind speeds credibly predict higher probabilities of detecting landslides regardless of topographic location, though much better in low-order channels and on midslope locations than on open slopes. Wind speed has less predictive power in areas that were impacted by tephra fall from recent volcanic eruptions, while the influence of forest cover in terms of crown openness remains.
Plain Language Summary: Chilean Patagonia hosts some of Earth's largest swaths of temperate rainforests, where frequent landslides erode soil, rock, and vegetation. We explore the role of forest cover and wind disturbances in promoting such landslides with a model that predicts, from crown openness and wind speed, the probability of detecting landslide terrain. We find that both forest cover and wind speed play important, yet previously underappreciated, roles in this context, especially when grouped by landform types and previous volcanic disturbance, which may override the comparably modest control of wind on landslides. Our study is the first of its kind in one of the windiest spots on Earth and encourages a more discerning approach to landslide prediction.
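The non-hierarchical core of such a model can be sketched as a plain logistic regression of landslide presence on crown openness and wind speed. The data below are synthetic and the fit is simple maximum-likelihood gradient ascent, not the paper's hierarchical Bayesian estimation grouped by landform type.

```python
# Sketch: logistic regression of landslide detection on crown openness and
# wind speed. Synthetic data; coefficients and fit method are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 500
openness = rng.random(n)  # crown openness, scaled 0..1
wind = rng.random(n)      # wind speed, scaled 0..1
X = np.column_stack([np.ones(n), openness, wind])

# Synthetic truth: both predictors raise landslide probability.
true_w = np.array([-2.0, 1.5, 2.5])
p = 1 / (1 + np.exp(-X @ true_w))
y = rng.random(n) < p  # observed landslide presence/absence

# Fit by gradient ascent on the Bernoulli log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    grad = X.T @ (y - 1 / (1 + np.exp(-X @ w)))
    w += 0.1 * grad / n
```

Both fitted slopes come out positive, mirroring the qualitative finding that higher crown openness and wind speed predict higher landslide probability.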
Current perception holds that summers are becoming drier, hotter, and more extreme; in urban areas this impression is reinforced by heat-island effects in densely built-up districts. To assess the true extent of drought, time series from 31 urban climate stations (DWD) covering the period 1950 to 2019 were evaluated with the Standardized Precipitation Index (SPI) with respect to drought duration, drought extremes, heat waves, and co-occurring hot and dry months.
The analysis reveals great heterogeneity within Germany: in most cities, 2018 brought a long drought with an average duration of six months, yet 2018 ranked among the three years with the longest droughts since 1950 in only a third of the cities. At most stations considered, the longest droughts occurred in 1953, 1971, and 1976. Some southern and central German cities show a statistically significant increase in the number of drought months per decade since 1950. Other cities, mostly in the north and northwest, show an increase only in the last two decades, or no trend at all. The compound analysis of co-occurring hot and dry months shows a strong increase within the last two decades at most stations, with the two components contributing to the increase in compound events in regionally very different proportions.
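The SPI underlying this kind of analysis can be illustrated with a deliberately simplified sketch: the operational SPI fits a gamma distribution to accumulated precipitation and transforms it to a standard-normal score; here empirical plotting positions stand in for the gamma fit, and the monthly sums are invented.

```python
# Simplified SPI sketch. The real SPI fits a gamma distribution to
# accumulated precipitation; here empirical ranks are mapped directly to
# standard-normal quantiles. Monthly sums (mm) are invented.
from statistics import NormalDist

monthly_precip = [62, 45, 80, 12, 5, 30, 55, 70, 22, 48, 90, 35]

n = len(monthly_precip)
ranks = {v: i + 1 for i, v in enumerate(sorted(monthly_precip))}
# Weibull plotting position rank/(n+1), then inverse standard-normal CDF.
spi = [NormalDist().inv_cdf(ranks[v] / (n + 1)) for v in monthly_precip]

# Negative SPI marks drier-than-usual months; a run of SPI <= -1 would
# count toward a drought spell in the sense used above.
drought_months = sum(1 for z in spi if z <= -1.0)
```

With a long record, runs of consecutive months below a threshold such as SPI ≤ −1 give the drought durations compared across stations in the study.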
Because silicon (Si) increases the resistance of plants against diverse abiotic and biotic stresses, Si is nowadays categorized as a beneficial substance for plants. However, humans directly influence Si cycling on a global scale. Intensified agriculture and corresponding harvest-related Si exports lead to Si losses in agricultural soils. This anthropogenic desilication might be a big challenge for modern agriculture. However, there is still only little knowledge about Si cycling in agricultural systems of the temperate zone, because most studies focus on rice and sugarcane production in (sub)tropical areas. Furthermore, many studies are performed over short terms only, and thus do not provide the opportunity to analyze slow changes in soil-plant systems (e.g., desilication) over long periods. We analyzed soil and plant samples from an ongoing long-term field experiment (established 1963) in the temperate zone (NE Germany) to evaluate the long-term effects of different nitrogen-phosphorus-potassium (NPK) fertilization rates and crop straw recycling (i.e., straw incorporation) on anthropogenic desilication. Our results clearly show that crop straw recycling not only prevents anthropogenic desilication (about 43-60% of Si exports can be saved by crop straw recycling in the long term), but also replenishes plant-available Si stocks of agricultural soil-plant systems. Furthermore, we found that a reduction of N fertilization rates of about 69% is possible without considerable biomass losses. This reduced need for N fertilizers can potentially be combined with the benefits of crop straw recycling, i.e., enhancement of carbon sequestration via straw inputs and prevention of anthropogenic desilication of agricultural soil-plant systems. Thus crop straw recycling might have the potential to act as a key management practice in sustainable, low-fertilization agriculture in the temperate zone in the future.
Rising temperatures in the Arctic affect soil microorganisms, herbivores, and peatland vegetation, thus directly and indirectly influencing microbial CH4 production. It is not currently known how methanotrophs in Arctic peat respond to combined changes in temperature, CH4 concentration, and vegetation. We studied methanotroph responses to temperature and CH4 concentration in peat exposed to herbivory and protected by exclosures. The methanotroph activity was assessed by CH4 oxidation rate measurements using peat soil microcosms and a pure culture of Methylobacter tundripaludum SV96, qPCR, and sequencing of pmoA transcripts. Elevated CH4 concentrations led to higher CH4 oxidation rates both in grazed and exclosed peat soils, but the strongest response was observed in grazed peat soils. Furthermore, the relative transcriptional activities of different methanotroph community members were affected by the CH4 concentrations. While transcriptional responses to low CH4 concentrations were more prevalent in grazed peat soils, responses to high CH4 concentrations were more prevalent in exclosed peat soils. We observed no significant methanotroph responses to increasing temperatures. We conclude that methanotroph communities in these peat soils respond to changes in the CH4 concentration depending on their previous exposure to grazing. This "conditioning" influences which strains will thrive and, therefore, determines the function of the methanotroph community.
Field-scale subsurface flow processes are difficult to observe and monitor. We investigated the value of gravity time series for identifying subsurface flow processes by carrying out a sprinkling experiment in the direct vicinity of a superconducting gravimeter. We demonstrate how different water mass distributions in the subsoil affect the gravity signal and show the benefit of using the shape of the gravity response curve to identify different subsurface flow processes. For this purpose, a simple hydro-gravimetric model was set up to test different scenarios in an optimization approach, including macropore flow, preferential flow, wetting front advancement (WFA), bypass flow, and perched water-table rise. Besides the gravity observations, electrical resistivity and soil moisture data were used for evaluation. For the study site, the combination of preferential flow and WFA corresponded best to the observations in a multi-criteria assessment. We argue that the approach of combining field-scale sprinkling experiments with gravity monitoring can be transferred to other sites for process identification, and discuss related uncertainties, including limitations of the simple model used here. The study stresses the value of advancing terrestrial gravimetry as an integrative and non-invasive monitoring technique for assessing hydrological states and dynamics.
The quantification of plant biomass with efficient measurement methods is of great importance to several fields of science. This thesis aims to infer the amount of hydrogen contained in an apple and a cherry orchard at the Marquardt research site (Potsdam) from single-tree estimates of above-ground biomass. To this end, the volume of 13 cherry and 11 apple trees was determined by dividing them into segments, measuring each segment individually, and assigning it to a diameter class. In addition, the density of the branches and the mean foliage mass were determined. A literature value for the wood density of the respective species was also used to calculate the biomass. The distribution of woody biomass across the diameter classes was examined, and easily measured tree parameters as well as terrestrial laser-scanner data were used as predictor variables in a regression analysis. The experimentally determined density values increased with branch diameter, deviating slightly from the literature value for cherry wood and more strongly for apple wood. The foliage-mass surveys were made independently of the measured trees and their results showed large variance, so no relationship between wood and foliage biomass could be established and only average values could be derived. The share of the individual diameter classes in the total mass proved highly variable, so estimating biomass from the weight of a few sturdy tree segments is unsuitable. A reliable and efficient estimate of above-ground woody biomass can, however, be achieved by applying the models developed here. For the present population of individuals of similar age and size, a linear regression yielded the best results.
While the laser-based variables hardly correlated with woody biomass, linear models with the trunk diameter d or d² as predictor showed high significance (p-value < 0.001) and a very good fit (R² > 0.8) for both species.
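The allometric model of this form (woody biomass regressed linearly on squared trunk diameter d²) can be sketched as follows. The measurements below are invented for illustration; the thesis itself reports p < 0.001 and R² > 0.8 for both species.

```python
# Sketch of a biomass ~ d² allometric regression. The diameter/biomass
# pairs are invented; only the model form follows the text above.
import numpy as np

d = np.array([6.0, 7.5, 8.0, 9.2, 10.1, 11.3])        # trunk diameter, cm
biomass = np.array([4.1, 6.3, 7.0, 9.8, 11.9, 14.6])  # woody biomass, kg

x = d ** 2
slope, intercept = np.polyfit(x, biomass, 1)  # ordinary least squares
pred = slope * x + intercept

# Coefficient of determination R² as a goodness-of-fit measure.
ss_res = np.sum((biomass - pred) ** 2)
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

Once fitted on destructively measured trees, such a model lets the standing biomass of the whole orchard be estimated from trunk diameters alone.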
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification, and attribution of changes in flood characteristics is key to assessing changes in flood hazard and helps affected societies to mitigate and adapt to emerging risks in a timely manner. The Rhine River is one of the major European rivers, and numerous large cities lie on its shores. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime. Rainfall, snowmelt, and ice melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world undergo fundamental changes. In hazard and risk research, the provision of background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows for the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results show that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals, and direct human impacts.
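The first step of such an analytical cascade (sampling an annual high-flow quantile and smoothing it with a moving average to expose a trend) can be sketched as follows, on synthetic runoff with an imposed upward trend. The empirical mode decomposition stage of the cascade is omitted here, and all numbers are illustrative.

```python
# Sketch: annual 95th-percentile runoff smoothed with a moving average.
# Runoff is synthetic (gamma-distributed with a slowly rising scale).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2020)
# One year of daily runoff per year, with the mean drifting upward.
daily = [rng.gamma(2.0, 50 + 0.3 * (y - 1950), 365) for y in years]

# Quantile sampling: one high-flow value per year.
q95 = np.array([np.quantile(d, 0.95) for d in daily])


def moving_average(x, window=11):
    """Centered moving average; output is shorter by window - 1."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")


trend = moving_average(q95)
```

The smoothed quantile series makes the imposed increase in high runoff visible above the year-to-year noise, which is the kind of signal the cascade is designed to isolate.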
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier, and forms at higher elevations. Furthermore, results indicate that due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; instead, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, with the affected elevation range set by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt is shifted upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. This upward shift makes snowmelt in individual elevation bands occur earlier. The timing of the snowmelt-induced runoff, however, stays the same: meltwater from higher elevations, at least partly, replaces meltwater from elevations below.
The insights on past and present changes in river runoff, snow covers, and underlying mechanisms form the basis of investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation, and evapotranspiration at 1.5, 2.0, and 3.0 °C of global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increase in the risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine, is controlled by both precipitation and snowmelt. Caution must be exercised when labelling sub-basins such as the Moselle catchment as purely pluvial-type, or the Rhine River Basin at Basel as purely nival-type. Results indicate that these (over-)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying, and attributing past, present, and future changes in Rhine flow/flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.