This paper presents two new pollen records and quantitative climate reconstructions from northern Chukotka documenting environmental changes over the last 27.9 ka. Open tundra- and steppe-like habitats dominated between 27.9 and 18.7 cal. ka BP. Betula and Alnus shrubs might have grown in sheltered microhabitats but disappeared after 18.7 cal. ka BP. Although the climate was rather harsh, local herb-dominated communities supported herbivores, as is evident from the presence of coprophilous spores in the sediments. The increase in Salix and Cyperaceae ~16.1 cal. ka BP suggests climate amelioration. Shrub Betula appeared ~15.9 cal. ka BP and became dominant after ~15.52 cal. ka BP, whilst typical steppe communities were drastically reduced. The very high presence of Botryococcus in the Lateglacial sediments reflects widespread shallow habitats, probably due to a lake-level increase. Shrub Alnus became common after ~13 cal. ka BP, reflecting further climate amelioration. Simultaneously, herb communities gradually decreased in the vegetation, reaching a minimum ~11.8 cal. ka BP. A gradual decrease of algae remains suggests a reduction of shallow-water habitats. Shrubby and graminoid tundra was dominant ~11.8-11.1 cal. ka BP; later, Salix stands significantly decreased. The forest-tundra ecotone established in the Early Holocene, shortly after 11.1 cal. ka BP. Low contents of green algae in the Early Holocene sediments likely reflect deeper aquatic conditions. The most favourable climate conditions occurred between ~10.6 and 7 cal. ka BP. Vegetation became similar to the modern after ~7 cal. ka BP, but Pinus pumila arrived in the Ilirney area at about 1.2 cal. ka BP. It is important to emphasize that the study area provided refugia for Betula and Alnus during MIS 2.
It is also notable that our records do not reflect evidence of Younger Dryas cooling, which is inconsistent with some regional environmental records but in good accordance with some others.
Precipitation forecasting has an important place in everyday life: during the day we may have dozens of brief conversations about the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components of a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for the diagnosis and isolation of nowcast errors that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
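The tracking-and-extrapolation idea behind such benchmark models can be illustrated with a deliberately minimal sketch (this is not the rainymotion API, whose actual interface differs): a single displacement vector is estimated from two consecutive radar frames via FFT-based cross-correlation, and then applied repeatedly to advect the latest frame.

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame):
    """Estimate one (dy, dx) displacement between two frames from the
    peak of their FFT-based circular cross-correlation."""
    f1 = np.fft.fft2(prev_frame)
    f2 = np.fft.fft2(curr_frame)
    xcorr = np.fft.ifft2(f2 * np.conj(f1)).real
    shift = np.array(np.unravel_index(np.argmax(xcorr), xcorr.shape))
    # wrap shifts larger than half the domain to negative values
    shape = np.array(prev_frame.shape)
    shift[shift > shape // 2] -= shape[shift > shape // 2]
    return shift  # pixels per time step

def nowcast(curr_frame, shift, n_steps):
    """Extrapolate the latest frame by repeatedly applying the
    estimated displacement (circular advection for simplicity)."""
    frames, frame = [], curr_frame
    for _ in range(n_steps):
        frame = np.roll(frame, tuple(shift), axis=(0, 1))
        frames.append(frame)
    return frames
```

Real optical-flow nowcasting estimates a spatially variable motion field and uses proper image warping rather than circular shifting; the sketch only conveys the two-step structure of tracking followed by advection.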
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
One of the main reasons for this is the high benefit of using "big data" for training. Hence, the emerging interest in deep learning in atmospheric sciences also goes hand in hand with the increasing availability of data, both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
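The recursive application described above can be sketched as a simple loop; `model` here is a stand-in for any one-step nowcast model, not RainNet itself.

```python
import numpy as np

def recursive_nowcast(model, frames, n_steps=12):
    """Apply a one-step (5 min) nowcast model recursively to reach
    longer lead times (12 x 5 min = 60 min). `model` maps a sequence
    of recent frames to the next frame; each prediction is fed back
    into the input window."""
    history = list(frames)
    predictions = []
    for _ in range(n_steps):
        nxt = model(history)
        predictions.append(nxt)
        history = history[1:] + [nxt]  # slide the input window
    return predictions
```

A dummy model that merely damps intensities at each step already illustrates the compounding effect discussed above: any one-step smoothing accumulates over recursive applications and grows with lead time.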
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
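A minimal sketch of this framework (with hypothetical function names, not the published implementation) reduces to two ingredients: extrapolating a feature track into the future, and measuring the Euclidean distance to the observed positions at each lead time.

```python
import numpy as np

def linear_extrapolation(track, n_leadtimes):
    """Predict future (y, x) positions of one feature by extrapolating
    the mean displacement of its observed track."""
    track = np.asarray(track, dtype=float)
    velocity = np.diff(track, axis=0).mean(axis=0)   # mean step (dy, dx)
    steps = np.arange(1, n_leadtimes + 1)[:, None]
    return track[-1] + steps * velocity

def location_error(observed, predicted):
    """Euclidean distance between observed and predicted feature
    locations; last axis holds the (y, x) coordinates."""
    return np.linalg.norm(np.asarray(observed) - np.asarray(predicted),
                          axis=-1)
```

In the actual framework the observed tracks come from detecting and tracking corners in radar images; here they are simply given as coordinate arrays.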
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
For around a decade, deep learning - the sub-field of machine learning that refers to artificial neural networks comprising many computational layers - has been reshaping statistical model development in many research areas, such as image classification, machine translation, and speech recognition. Geoscientific disciplines in general, and the field of hydrology in particular, have not stood aside from this movement. Recently, modern deep learning-based techniques and methods have been gaining popularity for solving a wide range of hydrological problems: modeling and forecasting of river runoff, regionalization of hydrological model parameters, assessment of available water resources, and identification of the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their high universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to regard deep neural networks as a new generation of mathematical models designed, if not to replace existing solutions, then to significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of the development and application of deep neural networks in hydrology. In addition, a qualitative long-term forecast of the development of deep learning technology for addressing the corresponding hydrological modeling challenges is provided, based on the "Gartner Hype Curve", which describes in general terms the life cycle of modern technologies.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two Artificial Neural Network (ANN) architectures that have only recently been introduced to the field of hydrology: Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU). We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation; two years were used as a warm-up period, one year for each of the calibration and validation periods, respectively; from the remaining 14 years, we sampled increasing amounts of data for model calibration and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. However, LSTM and GRU also exhibited higher calibration instability than GR4H. These findings confirm the potential of modern deep-learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
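To make the architectural difference tangible, the following sketch implements a single forward step of a GRU cell in plain NumPy, showing the update and reset gates that distinguish it from a vanilla RNN. The weights are random illustrations, not the trained models of the study, and the sign convention for the update gate varies across references.

```python
import numpy as np

def gru_cell(x, h, params):
    """One forward step of a GRU cell: update gate z, reset gate r,
    candidate state h_tilde, then a gated blend of old and new state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # new hidden state

def init_params(n_in, n_hidden, seed=0):
    """Small random weights for illustration only."""
    rng = np.random.default_rng(seed)
    w = lambda rows, cols: rng.normal(scale=0.1, size=(rows, cols))
    return (w(n_hidden, n_in), w(n_hidden, n_hidden), np.zeros(n_hidden),
            w(n_hidden, n_in), w(n_hidden, n_hidden), np.zeros(n_hidden),
            w(n_hidden, n_in), w(n_hidden, n_hidden), np.zeros(n_hidden))
```

An LSTM cell differs mainly in carrying a separate cell state and three gates instead of two, which is one reason the two architectures can respond differently to limited calibration data.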
Starkregen in Berlin
(2021)
In the summers of 2017 and 2019, heavy rainfall events caused flooding at several locations in Berlin. In both years this led to considerable disruption of Berliners' everyday lives as well as to substantial property damage. An interdisciplinary task force of the DFG research training group NatRiskChange investigated (1) the meteorological characteristics of two particularly striking storms, and (2) the vulnerability of Berlin's population to heavy rainfall.
A comparative meteorological reconstruction of the 2017 and 2019 heavy rainfall events revealed clear differences in the genesis and exceedance probabilities of the two storms. The 2017 event, with its relatively large spatial extent and long duration, was an atypical heavy rainfall event, whereas the 2019 storm was a typical short-duration heavy rainfall event with pronounced spatial heterogeneity. A subsequent statistical analysis showed that, for longer rainfall durations (>=24 h), the 2017 event classifies as a large-scale extreme event with exceedance probabilities below 1% (i.e., return periods >=100 years). For 2019, by contrast, similar exceedance probabilities were calculated only locally and for shorter durations (1-2 h).
The vulnerability analysis is based on an online survey conducted in Berlin from April to June 2020. It addressed people who had already been affected by past heavy rainfall events and covered the damaging event itself, the resulting disruptions and damage, risk perception, and emergency and precautionary measures. The survey data collected (n=102) refer primarily to the 2017 and 2019 events and show that Berlin's population was affected by the storms both in everyday life (e.g., when procuring groceries) and in their own households (e.g., through flood damage). Moreover, the respondents' answers pointed to options for further reducing society's vulnerability to heavy rainfall, for instance by supporting particularly affected groups (e.g., caregivers), by targeted information campaigns on protection against heavy rainfall, or by increasing the reach of severe weather warnings. A statistical analysis of the effectiveness of private emergency and precautionary measures based on the survey data confirmed previous study results: there were indications that implementing precautionary measures, such as installing backwater valves, barrier systems, or pumps, can reduce damage from heavy rainfall.
The results of this report underline the need for integrated heavy rainfall risk management that considers the risk components hazard, vulnerability, and exposure holistically and at multiple levels (e.g., state, municipal, private).
The efficiency of sediment routing from land to the ocean depends on the position of submarine canyon heads with regard to terrestrial sediment sources. We aim to identify the main controls on whether a submarine canyon head remains connected to terrestrial sediment input during Holocene sea-level rise. Globally, we identified 798 canyon heads that are currently located at the 120 m depth contour (the Last Glacial Maximum shoreline) and 183 canyon heads that are connected to the shore (within a distance of 6 km) during the present-day highstand. Regional hotspots of shore-connected canyons are the Mediterranean active margin and the Pacific coast of Central and South America. We used 34 terrestrial and marine predictor variables to predict shore-connected canyon occurrence using Bayesian regression. Our analysis shows that steep and narrow shelves facilitate canyon-head connectivity to the shore. Moreover, shore-connected canyons occur preferentially along active margins characterized by resistant bedrock and high river-water discharge.
Throughfall, that is, the fraction of rainfall that passes through the forest canopy, is strongly influenced by rainfall and forest stand characteristics which are in turn both subject to seasonal dynamics. Disentangling the complex interplay of these controls is challenging, and only possible with long-term monitoring and a large number of throughfall events measured in parallel at different forest stands. We therefore based our analysis on 346 rainfall events across six different forest stands at the long-term terrestrial environmental observatory TERENO Northeast Germany. These forest stands included pure stands of beech, pine and young pine, and mixed stands of oak-beech, pine-beech and pine-oak-beech. Throughfall was overall relatively low, with 54-68% of incident rainfall in summer. Based on the large number of events it was possible to not only investigate mean or cumulative throughfall but also its statistical distribution. The distributions of throughfall fractions show distinct differences between the three types of forest stands (deciduous, mixed and pine). The distributions of the deciduous stands have a pronounced peak at low throughfall fractions and a secondary peak at high fractions in summer, as well as a pronounced peak at higher throughfall fractions in winter. Interestingly, the mixed stands behave like deciduous stands in summer and like pine stands in winter: their summer distributions are similar to the deciduous stands but the winter peak at high throughfall fractions is much less pronounced. The seasonal comparison further revealed that the wooden components and the leaves behaved differently in their throughfall response to incident rainfall, especially at higher rainfall intensities. 
These results are of interest for estimating forest water budgets and in the context of hydrological and land surface modelling where poor simulation of throughfall would adversely impact estimates of evaporative recycling and water availability for vegetation and runoff.
Indices of oscillatory behavior are conveniently obtained by projecting the fields in question into a phase space of a few (mostly just two) dimensions; empirical orthogonal functions (EOFs) or other, more dynamical, modes are typically used for the projection. If sufficiently coherent and in quadrature, the projected variables simply describe a rotating vector in the phase space, which then serves as the basis for predictions. Using the boreal summer intraseasonal oscillation (BSISO) as a test case, an alternative procedure is introduced: it augments the original fields with their Hilbert transform (HT) to form a complex series and projects it onto its (single) dominant EOF. The real and imaginary parts of the corresponding complex pattern and index are compared with those of the original (real) EOF. The new index explains slightly less variance of the physical fields than the original, but it is much more coherent, partly because the HT makes use of future information. Since that precludes real-time monitoring, the index can only be used in cases with predicted physical fields, for which it promises to be superior. By developing a causal approximation of the HT, a real-time variant of the index is obtained whose coherency is comparable to the noncausal version, but with smaller explained variance of the physical fields. In test cases the new index compares well with other indices of BSISO. The potential for using both indices as an alternative is discussed.
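The analytic-signal construction at the heart of this procedure can be sketched in a few lines: the real series is augmented with its Hilbert transform by zeroing the negative frequencies of its FFT, which is noncausal because every output sample depends on the full (past and future) series. This mirrors the standard algorithm also used by `scipy.signal.hilbert`.

```python
import numpy as np

def analytic_signal(x):
    """Complex series x + i*H[x] via the FFT: keep the zero (and, for
    even n, Nyquist) frequency, double the positive frequencies, and
    zero the negative ones. Noncausal: each output sample uses the
    whole input series."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)
```

Projecting such complex series (one per grid point) onto their dominant complex EOF, e.g. via an SVD of the complex data matrix, then yields a complex pattern and index of the kind described above.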
Extreme rainfall events of short duration, in the range of hours and below, are attracting increasing attention because of the damage they cause through flash floods and because of their possible intensification by anthropogenic climate change. The present study investigates possible trends in heavy rainfall intensities based on partly very long (> 50 years) time series of high temporal resolution (≤ 15 minutes) for stations in the Swiss and Austrian Alpine regions as well as for the Emscher-Lippe area in North Rhine-Westphalia. It becomes clear that there is an increase in extreme precipitation intensities that can be well explained by the warming of the regional climate: analyses of long-term trends in exceedance sums and return levels show considerable uncertainties, but indicate an increase on the order of 30% per century. In addition, based on a "medium" climate simulation for the 21st century, this contribution describes a projection of extreme precipitation intensities at very high temporal resolution for selected stations in the Emscher-Lippe region. A coupled spatial and temporal downscaling is applied whose decisive innovation is that it accounts for the dependence of local rainfall intensity on air temperature. The procedure consists of two steps: first, large-scale climate fields at daily resolution are statistically linked to the station temperature and precipitation values by regression (spatial downscaling). In the second step, these station values are disaggregated to a temporal resolution of 10 minutes using a so-called multiplicative stochastic cascade model (MC) (temporal downscaling). The novel, temperature-sensitive variant additionally considers air temperature as an explanatory variable for precipitation intensities.
In this way, the higher atmospheric moisture content expected with warming, which follows from the Clausius-Clapeyron relation (CC), is incorporated into the temporal downscaling.
For the statistical evaluation of extreme short-term precipitation, the upper quantiles (99.9%), exceedance sums (P > 5 mm), and 3-year return levels for durations of ≤ 15 minutes were considered. This selection allows the simultaneous analysis of both extreme value statistics and their long-term trends; slight deviations from this choice affect the main results only marginally. Only by including temperature is the observed temperature dependence of the extreme quantiles (CC scaling) well reproduced. When comparing observational data with present-day simulations of the model cascade, the temperature-sensitive procedure yields consistent results. Compared with the developments of recent decades, similar or even stronger increases in extreme precipitation intensities are projected for the future. This is remarkable insofar as these appear to be determined mainly by local temperature, since the projected trends of daily precipitation totals are negligible for this region.
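The multiplicative cascade step of such a temporal downscaling can be illustrated with a dyadic toy version: each interval's total is split into two sub-intervals with random weights that conserve mass. Real cascade models use branching adapted to the target resolution, and the temperature-sensitive variant described above additionally conditions the weights on air temperature; both refinements are omitted here, and the Beta-distributed weights are an illustrative assumption.

```python
import numpy as np

def cascade_disaggregate(totals, n_levels, beta=2.0, seed=0):
    """Disaggregate coarse rainfall totals by a multiplicative random
    cascade: each level splits every interval into two sub-intervals
    with weights (w, 1 - w), w ~ Beta(beta, beta), conserving mass."""
    rng = np.random.default_rng(seed)
    series = np.asarray(totals, dtype=float)
    for _ in range(n_levels):
        w = rng.beta(beta, beta, size=series.size)
        halves = np.empty(series.size * 2)
        halves[0::2] = series * w          # first sub-interval
        halves[1::2] = series * (1.0 - w)  # second sub-interval
        series = halves
    return series
```

Because every split conserves mass exactly, the fine-resolution series sums back to the coarse totals, interval by interval, while the intermittent, bursty structure of short-duration rainfall emerges from the random weights.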
Fires are a fundamental part of the Earth system. In recent decades, they have been altering ecosystem structure, biogeochemical cycles and atmospheric composition with unprecedented rapidity. In this study, we implement a complex-networks-based methodology to track individual fires over space and time. We focus on extreme fires (the 5% most intense fires) in the tropical forests of the Brazilian Legal Amazon over the period 2002-2019. We analyse the interannual variability in the number and spatial patterns of extreme forest fires in years with diverse climatic conditions and anthropogenic pressure to examine potential synergies between climate and anthropogenic drivers. We observe that major droughts, which increase forest flammability, co-occur with years of many extreme fires, but also that it is essential to consider anthropogenic activities to understand the distribution of extreme fires. Deforestation fires, fires escaping from managed lands, and other types of forest degradation and fragmentation provide the ignition sources for fires in the forests. We find that all extreme forest fires identified are located within a 0.5 km distance from forest edges, and up to 56% of them are within a 1 km distance from roads (which increases to 73% within 5 km), a strong correlation that defines the spatial patterns of extreme fires.
Relationships between climate, species composition, and species richness are of particular importance for understanding how boreal ecosystems will respond to ongoing climate change. This study aims to reconstruct changes in terrestrial vegetation composition and taxa richness during the glacial Late Pleistocene and the interglacial Holocene in the sparsely studied southeastern Yakutia (Siberia) by using pollen and sedimentary ancient DNA (sedaDNA) records. Pollen and sedaDNA metabarcoding data using the trnL g and h markers were obtained from a sediment core from Lake Bolshoe Toko. Both proxies were used to reconstruct the vegetation composition, while metabarcoding data were also used to investigate changes in plant taxa richness. The combination of pollen and sedaDNA approaches allows a robust estimation of regional and local past terrestrial vegetation composition around Bolshoe Toko during the last ~35,000 years. Both proxies suggest that during the Late Pleistocene, southeastern Siberia was covered by open steppe-tundra dominated by graminoids and forbs with patches of shrubs, confirming that steppe-tundra extended far south in Siberia. Both proxies show disturbance at the transition between the Late Pleistocene and the Holocene suggesting a period with scarce vegetation, changes in the hydrochemical conditions in the lake, and in sedimentation rates. Both proxies document drastic changes in vegetation composition in the early Holocene with an increased number of trees and shrubs and the appearance of new tree taxa in the lake's vicinity. The sedaDNA method suggests that the Late Pleistocene steppe-tundra vegetation supported a higher number of terrestrial plant taxa than the forested Holocene. This could be explained, for example, by the "keystone herbivore" hypothesis, which suggests that Late Pleistocene megaherbivores were able to maintain a high plant diversity.
This is discussed in light of the data alongside the broadly accepted species-area hypothesis, as steppe-tundra covered such an extensive area during the Late Pleistocene.
The growing worldwide impact of flood events has motivated the development and application of global flood hazard models (GFHMs). These models have become useful tools for flood risk assessment and management, especially in regions where little local hazard information is available. One of the key uncertainties associated with GFHMs is the estimation of extreme flood magnitudes to generate flood hazard maps. In this study, the 1-in-100 year flood (Q100) magnitude was estimated using flow outputs from four global hydrological models (GHMs) and two global flood frequency analysis datasets for 1350 gauges across the conterminous US. The annual maximum flows of the observed and modelled time series of streamflow were bootstrapped to evaluate the sensitivity of the underlying data to extrapolation. Results show that there are clear spatial patterns of bias associated with each method. GHMs show a general tendency to overpredict at Western US gauges and underpredict at Eastern US gauges. The GloFAS and HYPE models underpredict Q100 by more than 25% in 68% and 52% of gauges, respectively. The PCR-GLOBWB and CaMa-Flood models overestimate Q100 by more than 25% at 60% and 65% of gauges in the West and Central US, respectively. The global frequency analysis datasets have spatial variabilities that differ from the GHMs. We found that river basin area and topographic elevation explain some of the spatial variability in predictive performance found in this study. However, there is no single model or method that performs best everywhere, and therefore we recommend that a weighted ensemble of predictions of extreme flood magnitudes be used for large-scale flood hazard assessment.
An increase in overall temperature due to climate change, and the associated increase in heat waves, prompted the Landesamt für Umwelt und Verbraucherschutz Nordrhein-Westfalen (LANUV, the state environment and consumer protection agency) to publish a guideline for protecting the positive climate function of urban soils. Building on this, the cooling capacity of urban soils was quantified at the regional level for the city of Düsseldorf in order to identify areas particularly worthy of protection. Within the ExTrass project, the cooling capacity of urban soils within Remscheid was now to be quantified, but on the basis of freely available data. Such a data basis rules out modelling the soil water balance, which was the foundation of the quantification in Düsseldorf, for Remscheid. However, the approach presented here offers the possibility of carrying out such an investigation in other municipalities within Germany with relatively little effort.
The cooling capacity of the soils was estimated via the plant-available water capacity, which indicates the water storage volume of the uppermost rooted soil zone. It is this soil water store that supplies water for evapotranspiration and thus largely defines the cooling capacity of a soil, i.e., through direct evaporation of soil water and through transpiration of water by plants. The map was compiled from: (a) the soil map of North Rhine-Westphalia (BK50) to determine the plant-available water capacity (nFK) per area; (b) the land use dataset UrbanAtlas 2012, combined with a literature review, to derive the influence of land use on the nFK values, particularly with regard to sealing and compaction; and (c) OpenStreetMap (OSM) to determine the share of sealed surfaces more precisely than would have been possible on the basis of the UrbanAtlas.
This approach proved suitable for investigating the spatial distribution of the potential soil cooling function within a city. Note that the influence of groundwater could not be taken into account in Remscheid: given the geological and topographic situation there, groundwater conditions are expected to vary on small spatial scales, so that there is no continuous, mapped aquifer.
Allotment gardens, parks and cemeteries in the inner city, and the land-use classes forest and grassland in general, were identified as areas with a particularly high potential soil cooling capacity. Such areas are especially worthy of protection. The analysis of the storage fill levels of the upper soil zone, based on the compiled map of the potential soil cooling function and the climatic water balance, showed that inner-city areas with a small soil water store in particular lose their cooling function early in the summer of a dry year and thus provide a reduced positive climate function during heatwaves. This finding is supported by an evaluation of the normalised difference vegetation index (NDVI), which was used to examine changes in plant vitality before and after a heat period in June/July 2018.
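The storage-fill analysis based on the climatic water balance can be sketched as a simple bucket model: the store is filled by precipitation, drained by evapotranspiration, and bounded by zero and the usable field capacity (nFK). The nFK values and the weekly forcing below are hypothetical, not Remscheid data.

```python
def storage_series(nfk_mm, precip_mm, pet_mm):
    """Track the plant-available soil water store week by week.
    Once the store is empty, no water is left for evapotranspiration,
    i.e. the soil's cooling function is lost until it is refilled."""
    storage = nfk_mm  # assume a full store at the start of the season
    series = []
    for p, et in zip(precip_mm, pet_mm):
        storage = min(nfk_mm, max(0.0, storage + p - et))
        series.append(storage)
    return series

# Hypothetical dry-summer forcing (weekly sums, mm)
precip = [5.0, 0.0, 2.0, 10.0, 0.0, 0.0]
pet    = [20.0, 25.0, 30.0, 28.0, 32.0, 30.0]

small_store = storage_series(40.0, precip, pet)    # compacted inner-city soil
large_store = storage_series(160.0, precip, pet)   # deep park or forest soil
```

With these illustrative numbers, the small store is exhausted after two weeks while the large store still holds water at the end of the dry spell, mirroring the contrast the abstract describes between inner-city soils and parks.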
Measurements with meteobikes, a setup suited to measuring temperature continuously during a bicycle ride, support the finding that inner-city green spaces such as parks have a positive effect on the urban microclimate. These measurements further indicate that the topography within the study area probably co-determines the heating of individual areas and the temperature distribution. The map of the potential cooling function for Remscheid presented here should be incorporated into the climate function map of Remscheid and replace the existing "areal climate function" layer ("flächenhafte Klimafunktion"), which considers only land use.
Glacial lakes in the Hindu Kush–Karakoram–Himalayas–Nyainqentanglha (HKKHN) region have grown rapidly in number and area in past decades, and some dozens have drained in catastrophic glacial lake outburst floods (GLOFs). Estimating regional susceptibility of glacial lakes has largely relied on qualitative assessments by experts, thus motivating a more systematic and quantitative appraisal. Against the backdrop of current climate-change projections and the potential of elevation-dependent warming, an objective and regionally consistent assessment is urgently needed. We use an inventory of 3390 moraine-dammed lakes and their documented outburst history in the past four decades to test whether elevation, lake area and its rate of change, glacier-mass balance, and monsoonality are useful inputs to a probabilistic classification model. We implement these candidate predictors in four Bayesian multi-level logistic regression models to estimate the posterior susceptibility to GLOFs. We find that mostly larger lakes have been more prone to GLOFs in the past four decades regardless of the elevation band in which they occurred. We also find that including the regional average glacier-mass balance improves the model classification. In contrast, changes in lake area and monsoonality play ambiguous roles. Our study provides the first quantitative evidence that GLOF susceptibility in the HKKHN scales with lake area, though less so with its dynamics. Our probabilistic prognoses offer an improvement over a random classification based on average GLOF frequency. Yet they also reveal some major uncertainties that have remained largely unquantified previously and that challenge the applicability of single models. Ensembles of multiple models could be a viable alternative for more accurately classifying the susceptibility of moraine-dammed lakes to GLOFs.
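The study fits Bayesian multi-level logistic regression models; as a rough analogue, a plain maximum-likelihood logistic regression on simulated lake data illustrates how a positive coefficient on (log) lake area encodes the finding that larger lakes are more outburst-prone. All data and values below are synthetic assumptions, not the study's inventory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a lake inventory: standardised log lake area
# and a binary outburst label, with larger lakes more outburst-prone.
n = 500
log_area = rng.normal(0.0, 1.0, n)
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * log_area)))
outburst = (rng.random(n) < p_true).astype(float)

def fit_logistic(x, y, lr=0.1, steps=2000):
    """Single-predictor logistic regression fitted by gradient ascent
    on the log-likelihood (a simplification of the paper's Bayesian
    multi-level approach)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b + w * x)))
        w += lr * np.mean((y - p) * x)   # d logL / dw
        b += lr * np.mean(y - p)         # d logL / db
    return w, b

w, b = fit_logistic(log_area, outburst)
```

A positive fitted `w` means the modelled outburst probability rises with lake area; the negative intercept `b` reflects that outbursts are rare overall, in line with the low average GLOF frequency the abstract mentions.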
Wildfires, as a key disturbance in forest ecosystems, are shaping the world's boreal landscapes. Changes in fire regimes are closely linked to a wide array of environmental factors, such as vegetation composition, climate change, and human activity. Arctic and boreal regions and, in particular, Siberian boreal forests are experiencing rising air and ground temperatures with the subsequent degradation of permafrost soils leading to shifts in tree cover and species composition. Compared to the boreal zones of North America or Europe, little is known about how such environmental changes might influence long-term fire regimes in Russia. The larch-dominated eastern Siberian deciduous boreal forests differ markedly from the composition of other boreal forests, yet data about past fire regimes remain sparse. Here, we present a high-resolution macroscopic charcoal record from lacustrine sediments of Lake Khamra (southwest Yakutia, Siberia) spanning the last ca. 2200 years, including information about charcoal particle sizes and morphotypes. Our results reveal a phase of increased charcoal accumulation between 600 and 900 CE, indicative of relatively high amounts of burnt biomass and high fire frequencies. This is followed by an almost 900-year-long period of low charcoal accumulation without significant peaks likely corresponding to cooler climate conditions. After 1750 CE fire frequencies and the relative amount of biomass burnt start to increase again, coinciding with a warming climate and increased anthropogenic land development after Russian colonization. In the 20th century, total charcoal accumulation decreases again to very low levels despite higher fire frequency, potentially reflecting a change in fire management strategies and/or a shift of the fire regime towards more frequent but smaller fires. 
A similar pattern for different charcoal morphotypes and comparison to a pollen and non-pollen palynomorph (NPP) record from the same sediment core indicate that broad-scale changes in vegetation composition were probably not a major driver of recorded fire regime changes. Instead, the fire regime of the last two millennia at Lake Khamra seems to be controlled mainly by a combination of short-term climate variability and anthropogenic fire ignition and suppression.
Knowing the source and runout of debris flows can help in planning strategies aimed at mitigating these hazards. Our research in this paper focuses on developing a novel approach for optimizing runout models for regional susceptibility modelling, with a case study in the upper Maipo River basin in the Andes of Santiago, Chile. We propose a two-stage optimization approach for automatically selecting parameters for estimating runout path and distance. This approach optimizes the random-walk and Perla et al.'s (PCM) two-parameter friction model components of the open-source Gravitational Process Path (GPP) modelling framework. To validate model performance, we assess the spatial transferability of the optimized runout model using spatial cross-validation, including exploring the model's sensitivity to sample size. We also present diagnostic tools for visualizing uncertainties in parameter selection and model performance. Although there was considerable variation in optimal parameters for individual events, we found our runout modelling approach performed well at regional prediction of potential runout areas. We also found that although a relatively small sample size was sufficient to achieve generally good runout modelling performance, larger sample sizes (i.e. >= 80) had higher model performance and lower uncertainties for estimating runout distances at unknown locations. We anticipate that this automated approach using the open-source R software and the System for Automated Geoscientific Analyses geographic information system (SAGA-GIS) will make process-based debris-flow models more readily accessible and thus enable researchers and spatial planners to improve regional-scale hazard assessments.
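The two-parameter friction model optimized above can be sketched in isolation. This reproduces only the PCM (Perla-Cheng-McClung) friction component, not the random-walk path selection or the GPP implementation itself; the slope profile and the values of the sliding friction `mu` and mass-to-drag ratio `md` are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def pcm_runout(segments, mu, md):
    """Propagate the squared flow velocity down a slope profile of
    (length_m, angle_deg) segments with the PCM two-parameter model:
    v_out^2 = a*md*(1 - e^(-2L/md)) + v_in^2 * e^(-2L/md),
    where a = g*(sin(theta) - mu*cos(theta)). Returns the runout
    distance along the profile at which the flow comes to rest."""
    v2 = 0.0
    distance = 0.0
    for length, angle_deg in segments:
        theta = math.radians(angle_deg)
        a = G * (math.sin(theta) - mu * math.cos(theta))  # net driving accel.
        decay = math.exp(-2.0 * length / md)
        v2_new = a * md * (1.0 - decay) + v2 * decay
        if v2_new <= 0.0:
            # Flow stops within this segment (requires a < 0 here);
            # solve a*md*(1 - e^(-2x/md)) + v2*e^(-2x/md) = 0 for x.
            if a < 0.0:
                distance += 0.5 * md * math.log((a * md - v2) / (a * md))
            return distance
        v2 = v2_new
        distance += length
    return distance

# Hypothetical profile: steep source, transitional slope, gentle fan
profile = [(200.0, 35.0), (150.0, 20.0), (400.0, 8.0)]
runout = pcm_runout(profile, mu=0.2, md=75.0)
```

In a regional optimization such as the one described above, `mu` and `md` are the calibration targets: raising `mu` shortens the modelled runout, which is the behaviour the parameter search exploits when matching mapped deposits.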
Ongoing glacier retreat exposes more sediment deposits, making them more susceptible to erosion. Increased sediment export rates endanger water quality as well as water supply through reservoir sedimentation. To better understand these hazards and their dynamics, erosion processes must be studied above all in high-alpine catchments. In this bachelor thesis, sediment concentrations and further environmental variables (discharge, precipitation and temperature) were measured in the Rofental, Ötztal Alps, and in a heavily glacierised sub-catchment of the Rofental. A quantile regression forest model was used to determine the relationship between sediment concentration and the measured environmental conditions. The variables were aggregated over different time scales, which made it possible to account for past hydroclimatic conditions. With knowledge of the influence of the various predictors, the sediment concentration could be modelled continuously in retrospect using a Monte Carlo approach, allowing statements about annual sediment export rates. In addition, turbidity, which can be regarded as a proxy for sediment concentration, was measured. By determining the correlation between modelled data and measured turbidity, the explanatory power of the model could be assessed. It was shown that the quantile regression forest model is suitable for reconstructing the sediment dynamics in the Rofental. It further emerged that discharge has the greatest influence on sediment dynamics in both study areas, although the relevance of the various variables differed strongly between the two areas.
Measured turbidity data and the modelled sediment concentrations were strongly positively correlated, with debris flows, measurement errors and the tapping of new sediment deposits reducing the model's goodness of fit.