Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out, for the first time, a comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improved flood risk mitigation and to easier monitoring of progress and achievements in implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and to provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and to provide additional information about the dominant pathways of severe floods. The main idea of the study was to start from the event impacts in a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, the frequency analysis of flood-triggering circulation pattern types revealed that several heavy-precipitation events were missing from the list of most severe floods: their impacts were not recorded in national and international loss databases, although they were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
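The classification idea behind the hierarchical cluster analyses can be sketched as follows. This is a minimal single-linkage agglomerative clustering on toy event feature vectors; the features, their scaling and the linkage choice are illustrative assumptions, not the calibrated setup of the thesis.

```python
# Hedged sketch: agglomerative (hierarchical) clustering of flood events
# described by normalized feature vectors, merging the closest clusters
# until k groups remain. Feature values below are invented toy numbers.
import math

def single_linkage(cluster_a, cluster_b):
    """Distance between two clusters: minimum pairwise Euclidean distance."""
    return min(math.dist(a, b) for a in cluster_a for b in cluster_b)

def agglomerate(events, k):
    """Merge the two closest clusters until only k clusters remain."""
    clusters = [[e] for e in events]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i].extend(clusters.pop(j))
    return clusters

# Toy events: (event precipitation, catchment size), both scaled to [0, 1]
events = [(0.9, 0.2), (0.85, 0.25), (0.2, 0.8), (0.25, 0.9)]
groups = agglomerate(events, k=2)  # e.g. rainfall-driven vs. large-catchment events
```

With well-separated events like these, the two groups recover the two underlying flood-producing regimes regardless of merge order.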
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method for the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that floods therefore deserve more attention on the national scale. The clustering results for the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) provide crucial information for source and pathway identification, which can serve as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), the model can be recommended for first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since its estimates compared well with documented losses.
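The basic accounting behind a direct-loss estimate of this kind can be sketched in a few lines: physically damaged units times a calibrated damage ratio times a unit replacement cost, summed over sectors. All sector names, ratios and costs below are invented placeholders, not the UNDRR parameters calibrated in the thesis.

```python
# Hedged sketch of a sector-wise direct economic loss estimate.
# Inputs are placeholders, not calibrated values.

def direct_loss(units, damage_ratio, unit_cost):
    """Direct loss for one sector: damaged units x damage ratio x cost/unit."""
    return units * damage_ratio * unit_cost

sectors = [
    # (sector, physically damaged units, damage ratio, replacement cost per unit in US$)
    ("housing", 120, 0.4, 50_000),
    ("roads_km", 15, 0.6, 200_000),
]
total = sum(direct_loss(u, r, c) for _, u, r, c in sectors)
# total direct loss: about US$ 4.2 million for these placeholder inputs
```

The appeal of such a scheme is that it only requires quickly observable quantities (damaged-unit counts) once the ratios and unit costs have been calibrated in advance.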
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
A water quality model for shallow river-lake systems and its application in river basin management
(2007)
This work documents the development and application of a new model for simulating mass transport and turnover in rivers and shallow lakes. The simulation tool called 'TRAM' is intended to complement mesoscale eco-hydrological catchment models in studies on river basin management. TRAM aims at describing the water quality of individual water bodies, using problem- and scale-adequate approaches for representing their hydrological and ecological characteristics. The need for such flexible water quality analysis and prediction tools is expected to further increase during the implementation of the European Water Framework Directive (WFD) as well as in the context of climate change research. The developed simulation tool consists of a transport and a reaction module with the latter being highly flexible with respect to the description of turnover processes in the aquatic environment. Therefore, simulation approaches of different complexity can easily be tested and model formulations can be chosen in consideration of the problem at hand, knowledge of process functioning, and data availability. Consequently, TRAM is suitable for both heavily simplified engineering applications as well as scientific ecosystem studies involving a large number of state variables, interactions, and boundary conditions. TRAM can easily be linked to catchment models off-line and it requires the use of external hydrodynamic simulation software. Parametrization of the model and visualization of simulation results are facilitated by the use of geographical information systems as well as specific pre- and post-processors. TRAM has been developed within the research project 'Management Options for the Havel River Basin' funded by the German Ministry of Education and Research. The project focused on the analysis of different options for reducing the nutrient load of surface waters. 
It was intended to support the implementation of the WFD in the lowland catchment of the Havel River located in North-East Germany. Within the above-mentioned study, TRAM was applied with two goals in mind. In a first step, the model was used for identifying the magnitude as well as spatial and temporal patterns of nitrogen retention and sediment phosphorus release in a 100 km stretch of the highly eutrophic Lower Havel River. From the system analysis, strongly simplified conceptual approaches for modeling N-retention and P-remobilization in the studied river-lake system were obtained. In a second step, the impact of reduced external nutrient loading on the nitrogen and phosphorus concentrations of the Havel River was simulated (scenario analysis) taking into account internal retention/release. The boundary conditions for the scenario analysis such as runoff and nutrient emissions from river basins were computed by project partners using the catchment models SWIM and ArcEGMO-Urban. Based on the output of TRAM, the considered options of emission control could finally be evaluated using a site-specific assessment scale which is compatible with the requirements of the WFD. Uncertainties in the model predictions were also examined. According to simulation results, the target of the WFD -- with respect to total phosphorus concentrations in the Lower Havel River -- could be achieved in the medium-term, if the full potential for reducing point and non-point emissions was tapped. Furthermore, model results suggest that internal phosphorus loading will ease off noticeably until 2015 due to a declining pool of sedimentary mobile phosphate. Mass balance calculations revealed that the lakes of the Lower Havel River are an important nitrogen sink. This natural retention effect contributes significantly to the efforts aimed at reducing the river's nitrogen load.
If a sustainable improvement of the river system's water quality is to be achieved, enhanced measures to further reduce the inputs of both phosphorus and nitrogen are required.
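The strongly simplified conceptual retention approaches themselves are not spelled out in this summary. As a hedged illustration only, a common first-order formulation treats a lake as a well-mixed reactor whose incoming nutrient load decays with residence time; the rate constant below is an assumed placeholder, not a TRAM parameter.

```python
# Illustrative first-order nutrient retention: L_out = L_in * exp(-k * tau),
# so the retained share is 1 - exp(-k * tau). All numbers are placeholders.
import math

def retained_fraction(k_per_day, residence_days):
    """Share of the incoming nutrient load retained in the lake."""
    return 1.0 - math.exp(-k_per_day * residence_days)

# E.g. an assumed retention rate of 0.05 1/d and 30 days residence time:
share = retained_fraction(0.05, 30.0)  # roughly 0.78 of the load retained
```

Such lumped formulations are attractive in river basin management because they need only a rate constant and a residence time, both of which can be estimated from mass balances.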
Precipitation forecasting has an important place in everyday life – during the day we may have a dozen small conversations about the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation to the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on the model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
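The extrapolation step shared by these benchmark models can be sketched very compactly: advect the latest radar field along a motion field, here collapsed to a single constant vector with integer grid shifts. The function names and the simplistic shift scheme are illustrative assumptions and do not reproduce the rainymotion API.

```python
# Hedged sketch of optical-flow-based extrapolation nowcasting:
# advect the most recent rain field along a (constant) motion vector,
# applied recursively, one step per lead-time interval.

def advect(field, dx, dy, fill=0.0):
    """Shift a 2D rain field by (dx, dy) grid cells (integer advection)."""
    rows, cols = len(field), len(field[0])
    out = [[fill] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            si, sj = i - dy, j - dx  # source cell for target cell (i, j)
            if 0 <= si < rows and 0 <= sj < cols:
                out[i][j] = field[si][sj]
    return out

def nowcast(field, dx, dy, steps):
    """Recursive advection: the motion vector is reapplied each time step."""
    for _ in range(steps):
        field = advect(field, dx, dy)
    return field

# A single rain cell moving one grid cell east per time step:
obs = [[0, 0, 0],
       [5, 0, 0],
       [0, 0, 0]]
fcst = nowcast(obs, dx=1, dy=0, steps=2)  # cell expected in the last column
```

In an operational setting, the constant vector would be replaced by a dense motion field estimated from consecutive radar images, and the integer shift by image warping with interpolation.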
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in the atmospheric sciences has also been driven by, and goes hand in hand with, the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this end, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
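The routine verification metrics used in these experiments can be written down directly: the mean absolute error (MAE) on intensities, and the critical success index (CSI), i.e. hits over hits plus misses plus false alarms at a rain-rate threshold. The toy fields and function names below are illustrative.

```python
# Hedged sketch of the verification metrics MAE and CSI on flattened
# radar fields (values in mm/h; the arrays here are toy examples).

def mae(obs, fcst):
    """Mean absolute error of predicted intensities."""
    return sum(abs(o - f) for o, f in zip(obs, fcst)) / len(obs)

def csi(obs, fcst, thr):
    """Critical success index at threshold thr: hits / (hits + misses + false alarms)."""
    hits = sum(1 for o, f in zip(obs, fcst) if o >= thr and f >= thr)
    misses = sum(1 for o, f in zip(obs, fcst) if o >= thr and f < thr)
    false_alarms = sum(1 for o, f in zip(obs, fcst) if o < thr and f >= thr)
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

obs  = [0.0, 0.2, 1.5, 6.0]
fcst = [0.1, 0.0, 1.2, 4.0]
err = mae(obs, fcst)            # about 0.65 mm/h
skill = csi(obs, fcst, thr=1.0)
```

Note how a smoothed forecast can score well on MAE and low CSI thresholds while failing the higher thresholds, which is exactly the behavior reported for RainNet above.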
The model development together with the verification experiments for both conventional and deep learning model predictions also revealed the need to better understand the source of forecast errors. Understanding the dominant sources of error in specific situations should help in guiding further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not made it possible to isolate the location error, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
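The location error as defined here reduces to a Euclidean distance between an observed and a predicted feature position at the same lead time. The coordinates below are toy values on a radar grid in km.

```python
# The location error defined above: Euclidean distance between the observed
# and the predicted position of a tracked precipitation feature.
import math

def location_error(observed, predicted):
    """Euclidean distance (km) between observed and predicted feature positions."""
    (xo, yo), (xp, yp) = observed, predicted
    return math.hypot(xo - xp, yo - yp)

# Observed corner track vs. a linear-extrapolation forecast at +60 min:
obs_60 = (412.0, 267.0)
fcst_60 = (409.0, 263.0)
err_km = location_error(obs_60, fcst_60)  # 5.0 km for these toy positions
```

Averaging this quantity over many tracked features and forecast times yields the error statistics reported in the benchmarking case study below.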
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of location errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into the future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. This is partly due to the scarcity of measurement time series long enough to, e.g., infer trends, and partly because it is difficult – if not impossible – to develop process-based models, given the complexity and multitude of processes involved in high alpine sediment dynamics. Therefore, knowledge has so far been confined to conceptual models (which do not facilitate deriving concrete timings or magnitudes for individual catchments) or qualitative estimates ('higher export in warmer years') that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained popularity for modeling sediment dynamics, since their black-box nature tailors them to the problem at hand, i.e. relatively well-understood input and output data linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.), were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third studies focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third study show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high-alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves – especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is its inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that the uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
Flood polders are part of the flood risk management strategy for many lowland rivers. They are used for the controlled storage of flood water so as to lower peak discharges of large floods. Consequently, the flood hazard in adjacent and downstream river reaches is decreased in the case of flood polder utilisation. Flood polders are usually dry storage reservoirs that are typically characterised by agricultural activities or other land use of low economic and ecological vulnerability. The objective of this thesis is to analyse hydraulic, environmental and economic impacts of the utilisation of flood polders in order to draw conclusions for their management. For this purpose, hydrodynamic and water quality modelling as well as an economic vulnerability assessment are employed in two study areas on the Middle Elbe River in Germany. One study area is an existing flood polder system on the tributary Havel, which was put into operation during the Elbe flood in summer 2002. The second study area is a planned flood polder, which is currently in the early planning stages. Furthermore, numerical models of different spatial dimensionality, ranging from zero- to two-dimensional, are applied in order to evaluate their suitability for hydrodynamic and water quality simulations of flood polders in regard to performance and modelling effort. The thesis concludes with overall recommendations on the management of flood polders, including operational schemes and land use. In view of future changes in flood frequency and further increasing values of private and public assets in flood-prone areas, flood polders may be effective and flexible technical flood protection measures that contribute to a successful flood risk management for large lowland rivers.
River reaches protected by dikes exhibit high damage potential due to strong value accumulation in the hinterland areas. While providing efficient protection against low-magnitude flood events, dikes may fail under the load of extreme water levels and long flood durations. Hazard and risk assessments for river reaches protected by dikes have so far not adequately considered fluvial inundation processes. In particular, the processes of dike failures and their influence on hinterland inundation and flood wave propagation lack comprehensive consideration. This study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) represents a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime. These are: (1) a 1D unsteady hydrodynamic model of river channel and floodplain flow between dikes, (2) a probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) a 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the coupled 1D and 2D models, the dependence between hydraulic loads at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by the seepage flow through the dike core (micro-instability). The 2D storage cell model driven by the breach outflow boundary conditions computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise.
IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes reflected in the form of input hydrographs and for the randomness of dike failures given by breach locations, times and widths. The model was developed and tested on a ca. 91 km heavily diked river reach on the German part of the Elbe River between gauges Torgau and Vockerode. The reach is characterised by low slope and fairly flat extended hinterland areas. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100, 200, 500, 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. In the disaggregated display mode, the dike hazard maps indicate the failure probabilities for each considered breach mechanism. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. Finally, scenarios of polder deployment for the extreme floods with T = 200, 500, 1000 a were simulated with IHAM. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions incorporating effects of technical flood protection measures. With its major outputs in the form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
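The Monte Carlo idea behind the probabilistic breach component can be illustrated with a deliberately simple sketch: sample many flood water levels and evaluate a hypothetical fragility curve giving the failure probability of a dike section as a function of load. The logistic curve shape and all parameters below are assumptions for illustration, not the calibrated IHAM breach model with its three failure mechanisms.

```python
# Hedged Monte Carlo sketch: estimate the breach probability of one dike
# section per flood event from an assumed fragility curve.
import math
import random

def failure_probability(level_m, crest_m=6.0, spread=0.5):
    """Assumed logistic fragility: P(failure) rises steeply near the crest."""
    return 1.0 / (1.0 + math.exp(-(level_m - crest_m) / spread))

def monte_carlo(n, mean_level=5.5, sd=0.4, seed=42):
    """Sample n flood peak levels and count sampled failures."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        level = rng.gauss(mean_level, sd)
        if rng.random() < failure_probability(level):
            failures += 1
    return failures / n

p_fail = monte_carlo(10_000)  # estimated breach probability per event
```

In the full system, each sampled failure would additionally draw a breach location, time and width, and feed the breach outflow into the 2D storage cell model.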
On a planetary scale, human populations need to adapt to both socio-economic and environmental problems amidst rapid global change. This holds true for coupled human-environment (socio-ecological) systems in rural and urban settings alike. Two examples are drylands and urban coasts. Such socio-ecological systems have a global distribution. Therefore, advancing the knowledge base for identifying socio-ecological adaptation needs with local vulnerability assessments alone is infeasible: the systems cover vast areas, while funding, time, and human resources for local assessments are limited. They are lacking in low- and middle-income countries (LICs and MICs) in particular.
But places in a specific socio-ecological system are not only unique and complex – they also exhibit similarities. A global patchwork of local vulnerability assessments in rural drylands has already been reduced to a limited number of problem structures that typically cause vulnerability. However, the question arises whether this is also possible in urban socio-ecological systems, and whether these typologies provide added value in research beyond global change. Finally, the methodology employed for drylands needs refining and standardizing to increase its uptake in the scientific community. In this dissertation, I set out to fill these three gaps in research.
The geographical focus of my dissertation is on LICs and MICs, which generally have lower capacities to adapt, and greater adaptation needs, regarding rapid global change. Using a spatially explicit indicator-based methodology, I combine geospatial and clustering methods to identify typical configurations of the key factors that cause vulnerability of human populations in case studies of two specific socio-ecological systems. Then I use statistical and analytical methods to interpret and appraise both the typical configurations and the global typologies they constitute.
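The clustering step of such an indicator-based methodology can be sketched as grouping case-study indicator vectors into typical "vulnerability profiles". The minimal k-means below operates on toy, already-normalized indicator values; the actual datasets, preprocessing and cluster algorithm of the dissertation are not reproduced here.

```python
# Hedged sketch: partition case studies into k typical profiles by k-means
# on normalized indicator vectors. All values are invented toy numbers.
import random

def kmeans(points, k, iters=50, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute centers as cluster means (keep old center if cluster empty)
        centers = [
            tuple(sum(vals) / len(vals) for vals in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy indicators per case study: (poverty, population growth, government effectiveness)
cases = [(0.9, 0.8, 0.1), (0.8, 0.9, 0.2), (0.2, 0.1, 0.9), (0.1, 0.2, 0.8)]
centers, clusters = kmeans(cases, k=2)  # two typical "profiles" emerge
```

Each resulting cluster center can then be read as a typical problem structure, and every mapped place assigned to a profile, including places without a local assessment.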
First, I improve the indicator-based methodology and then reanalyze typical global problem structures of socio-ecological drylands vulnerability with seven indicator datasets. The reanalysis confirms the key tenets and produces a more realistic and nuanced typology of eight spatially explicit problem structures, or vulnerability profiles: two new profiles with typically high natural resource endowment emerge, in which overpopulation has led to medium or high soil erosion. Second, I determine whether the new drylands typology and its socio-ecological vulnerability concept advance a thematically linked scientific debate in human security studies: what drives violent conflict in drylands? The typology is a much better predictor of conflict distribution and incidence in drylands than the regression models typically used in peace research. Third, I analyze global problem structures typically causing vulnerability in an urban socio-ecological system, the rapidly urbanizing coastal fringe (RUCF), with eleven indicator datasets. The RUCF also shows a robust typology, and its seven profiles reveal huge asymmetries in vulnerability and adaptive capacity. The fastest population increase, lowest income, most ineffective governments, most prevalent poverty, and lowest adaptive capacity are all typically stacked in two profiles in LICs. This shows that, beyond local case studies, tropical cyclones and/or coastal flooding are stalling neither rapid population growth nor urban expansion in the RUCF. I propose entry points for scaling up successful vulnerability reduction strategies in coastal cities within the same vulnerability profile.
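The typologies above rest on clustering locations by their indicator values, so that each cluster center corresponds to one typical configuration, i.e. one vulnerability profile. The thesis's exact algorithm, indicator set, and preprocessing are not given here, so the following is only a minimal sketch of the idea using a plain k-means on toy data; the indicator names in the comments are hypothetical.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: assign each sample (e.g. a grid cell described by
    normalized vulnerability indicators) to one of k cluster profiles."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # squared distance of every sample to every cluster center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy data: 3 hypothetical indicators (e.g. soil erosion, poverty, water
# stress), drawn around two distinct "problem structures".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (50, 3)),
               rng.normal(0.8, 0.05, (50, 3))])
labels, centers = kmeans(X, k=2)
```

Each row of `centers` then describes one profile: a combination of indicator values shared by many locations, including locations that were never assessed individually.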
This dissertation shows that patchworks of local vulnerability assessments can be generalized to structure global socio-ecological vulnerabilities in both rural and urban socio-ecological systems according to typical problems. In terms of climate-related extreme events in the RUCF, conflicting problem structures and means to deal with them are threatening to widen the development gap between LICs and high-income countries unless successful vulnerability reduction measures are comprehensively scaled up. The explanatory power for human security in drylands warrants further applications of the methodology beyond global environmental change research in the future. Thus, analyzing spatially explicit global typologies of socio-ecological vulnerability is a useful complement to local assessments: The typologies provide entry points for where to consider which generic measures to reduce typical problem structures – including the countless places without local assessments. This can save limited time and financial resources for adaptation under rapid global change.
Large-scale patterns of global land use change are frequently accompanied by natural habitat loss. Assessing the consequences of habitat loss for the remaining natural and semi-natural biotopes requires the inclusion of cumulative effects at the landscape level. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though there are few examples of its application to ecological assessments. A comprehensive biotope vulnerability analysis allows identification of the areas that are most affected by landscape change and at the same time have the lowest chances of regeneration.
To this end, a series of ecological indicators was reviewed and developed. They measured spatial attributes of individual biotopes as well as ecological and conservation characteristics of the respective resident species communities. The final vulnerability index combined seven largely independent indicators covering the exposure, sensitivity and adaptive capacity of biotopes to landscape changes. Results for biotope vulnerability were provided at the regional level, which appears to be an appropriate extent with relevance for spatial planning and for designing the distribution of nature reserves.
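How such indicators might be aggregated into one index can be illustrated with a minimal sketch. The min-max normalization and equal weighting used here are illustrative assumptions, not the thesis's actual aggregation scheme, and the three indicator names are hypothetical stand-ins for the seven real ones.

```python
import numpy as np

def minmax(x):
    """Rescale an indicator to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def vulnerability_index(indicators, invert=()):
    """Combine indicator arrays into one index per biotope.
    `invert` marks indicators where high raw values mean LOW vulnerability
    (e.g. adaptive capacity), so they are flipped before averaging.
    Equal weights are an illustrative assumption."""
    scaled = []
    for i, x in enumerate(indicators):
        s = minmax(x)
        scaled.append(1.0 - s if i in invert else s)
    return np.mean(scaled, axis=0)

# Three toy indicators for four biotopes:
exposure    = [0.1, 0.5, 0.9, 0.3]   # e.g. surrounding land-use change
sensitivity = [10, 40, 80, 20]       # e.g. share of specialist species (%)
capacity    = [0.8, 0.4, 0.1, 0.6]   # e.g. connectivity; high = less vulnerable
vi = vulnerability_index([exposure, sensitivity, capacity], invert=(2,))
```

The resulting scores can then be mapped per biotope to locate hot spots, as done for Brandenburg below.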
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
The implementation of biotope vulnerability analysis at the regional level demonstrated that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides a synoptic overview and information on individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of the population, species, and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, aiding in identifying options to slow habitat loss to sustainable levels. It can also be incorporated into the planning of restoration measures, guiding efforts to remedy ecological damage. Restoration of any specific site could yield synergies with the conservation objectives of other sites by enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
Biogeochemical analyses of lacustrine environments are well-established methods for exploring and understanding the complex processes within lake ecosystems. However, most such studies were conducted in temperate lakes, which are controlled by entirely different physical conditions than lakes in tropical climates. The most important difference is the lack of seasonal temperature fluctuations in tropical lakes, which leads to a stable temperature gradient in the water column. The water column in tropical latitudes is therefore generally devoid of the perturbations seen in its temperate counterparts, and permanent stratification provides optimal conditions for undisturbed sedimentation. The geochemical processes in the water column and the weathering of the distinct lithologies in the catchment lead to different biogeochemical characteristics in the sediment. A biogeochemical study of these lake sediments, especially at the sediment-water interface (SWI), helps reveal the record of sedimentation and diagenetic processes as influenced by internal or external loading. The study area, Lake Sentani, is one of thousands of lakes in Indonesia and is located in Papua province. This tropical lake has a unique feature: it consists of four interconnected sub-basins with different water depths. More importantly, its catchment comprises various lithologies, ranging from mafic and ultramafic rocks to clastic sediments and carbonates, so each sub-basin receives a distinct sediment input. Besides this natural loading, Lake Sentani is also influenced by anthropogenic input: previous studies have shown that population growth around the lake has direct consequences for eutrophication.
Considering these factors, the government of the Republic of Indonesia put Lake Sentani on the list of national priority lakes for restoration. This thesis aims to develop a fundamental understanding of Lake Sentani's sedimentary geochemistry and geomicrobiology, with a special focus on the effects of the different lithologies and anthropogenic pressures in the catchment area. To meet this objective, we conducted geochemical and geomicrobiological research on Lake Sentani, investigating the geochemical characteristics of the water column, porewater, and sediment cores of the four sub-basins. In addition to direct investigations of the lake itself, we also studied the sediments of the tributary rivers, some of which are ephemeral, as well as the river mouths, as connections between the riverine and lacustrine habitats. The thesis is composed of three main publications on Lake Sentani, supported by several publications that focus on other tropical lakes in Indonesia. The first main publication investigates the geochemical characterization of the water column, porewater, and surface sediment (upper 40-50 cm) from the center of each of the four sub-basins. It reveals that, besides catchment lithology, the water column heavily influences the geochemical characteristics of the lake sediments and their porewater, with water column stratification exerting a strong influence on overall chemistry. The four sub-basins differ greatly in their water column chemistry. Based on the physicochemical profiles, especially dissolved oxygen, one sub-basin is oxygenated, one is intermediate (oxygen depletion is reached only at the sediment-water interface), and two are fully meromictic. However, all four sub-basins share the same surface water chemistry. The structure of the water column creates differences in the patterns of anions and cations in the porewater.
Likewise, the distinct differences in geochemical composition between the sub-basins show that the lithology of the catchment affects the geochemical characteristics of the sediment. Overall, water column stratification, and particularly bottom water oxygenation, strongly influences the elemental composition of the sediment and porewater. The second publication reveals differences in surface sediment composition between habitats, influenced by lithological variations in the catchment area. The macro-element distribution shows that the geochemical characteristics differ between habitats, and the geochemical composition also indicates a distinct distribution between the sub-basins. In the eastern sub-basin, the geochemical composition suggests that lithogenic elements dominate over authigenic elements, which is also supported by sulfide speciation, particle distribution, and smear slide data. The third publication is a geomicrobiological study of the surface sediment, comparing the geochemical and microbiological composition of the surface sediment and their respective signals. Next Generation Sequencing (NGS) of the 16S rRNA gene was applied to determine the microbial community composition of surface sediments from a large number of sampling sites in all four sub-basins as well as in the rivers and river mouths, illustrating the links between river, river mouth, and lake. Rigorous assessment of microbial communities across the diverse habitats of Lake Sentani allowed us to study some of these links and report novel findings on microbial patterns in such ecosystems. A Principal Coordinates Analysis (PCoA) based on microbial community composition highlighted commonalities, but also differences, between the microbial community data and the geochemical data.
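The ordination step used to compare sites can be sketched in a few lines. The following assumes Bray-Curtis dissimilarities between samples, a common choice for 16S count data (the exact distance metric used in the thesis is not stated here), and implements classical PCoA via Gower's double centering of the squared distance matrix.

```python
import numpy as np

def bray_curtis(counts):
    """Pairwise Bray-Curtis dissimilarity between samples (rows)."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            num = np.abs(counts[i] - counts[j]).sum()
            den = (counts[i] + counts[j]).sum()
            d[i, j] = num / den
    return d

def pcoa(d, n_axes=2):
    """Classical PCoA: double-center the squared distance matrix and take
    the leading eigenvectors as ordination axes."""
    n = len(d)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (d ** 2) @ J              # Gower's double centering
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    coords = vecs[:, :n_axes] * np.sqrt(np.maximum(vals[:n_axes], 0))
    return coords, vals

# Toy OTU count table: rows = samples (2 river-like, 2 lake-like), cols = taxa.
counts = np.array([[90,  5,  5],
                   [85, 10,  5],
                   [ 5, 50, 45],
                   [10, 45, 45]])
coords, vals = pcoa(bray_curtis(counts))
```

With community data structured like this, the first PCoA axis separates the habitat groups, which is the kind of pattern contrasted against the geochemical data above.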
The microbial communities of the rivers, river mouths and sub-basins are strongly influenced by anthropogenic input from the catchment area. In general, Bacteroidetes and Firmicutes can serve as indicators of river sediments. The microbial community in the rivers is directly influenced by anthropogenic pressure and is markedly different from that of the lake sediment. The microbial community in the lake sediment, in turn, reflects the anoxic environment that prevails across the lake in all sediments below a few mm burial depth; the lake sediments harbour abundant sulfate reducers and methanogens. The microbial communities in sediments from river mouths are influenced by both the river and the lake ecosystem. This study provides valuable information for understanding the basic processes that control biogeochemical cycling in Lake Sentani, and helps lake managers assess the changing environmental conditions related to anthropogenic pressure in the catchment area. Lake Sentani is a unique study site, directly influenced by the different geology across the watershed and the morphometry of the four studied sub-basins. As a result, there are distinct geochemical differences between the habitats (river, river mouth, lake) and the four sub-basins. Microbial community composition also differs between habitats, although not obviously between the four sub-basins. Unlike sediment geochemistry, however, microbial community composition is impacted by human activities. This thesis therefore provides crucial baseline data for future lake management.
Calibration of the global hydrological model WGHM with water mass variations from GRACE gravity data
(2010)
Since the launch of the GRACE (Gravity Recovery And Climate Experiment) mission in 2002, time-dependent global maps of the Earth's gravity field have been available to study geophysical and climatologically driven mass redistributions on the Earth's surface. In particular, GRACE observations of total water storage variations (TWSV) provide a comprehensive data set for analysing the water cycle on large scales. They are therefore invaluable for the validation and calibration of large-scale hydrological models such as the WaterGAP Global Hydrology Model (WGHM), which simulates the continental water cycle including its most important components: soil, snow, canopy, surface water and groundwater. Hitherto, WGHM has exhibited significant differences from GRACE, especially in the seasonal amplitude of TWSV. The need to validate hydrological models is further highlighted by large differences between several global models, e.g. WGHM, the Global Land Data Assimilation System (GLDAS) and the Land Dynamics model (LaD). In this context, GRACE links geodetic and hydrological research, and this link demands the development of adequate data integration methods on both sides, which form the main objectives of this work. They include the derivation of accurate GRACE-based water storage changes and the development of strategies to integrate GRACE data into a global hydrological model as well as a calibration method, followed by the re-calibration of WGHM in order to analyse process and model responses. To achieve these aims, GRACE filter tools for the derivation of regionally averaged TWSV were evaluated for specific river basins. A decorrelation filter that uses the GRACE orbits for its design proved most efficient among the tested methods. Consistency in the data and equal spatial resolution between observed and simulated TWSV were achieved by including the most important hydrological processes and filtering both data sets equally.
Appropriate calibration parameters were derived from a WGHM sensitivity analysis against TWSV. Finally, a multi-objective calibration framework was developed to constrain model predictions by both river discharge and GRACE TWSV, realised with an evolutionary method, the ε-Non-dominated Sorting Genetic Algorithm II (ε-NSGA-II). The model was calibrated for the 28 largest river basins worldwide, and for most of them improved simulation results were achieved with regard to both objectives. The multi-objective approach yielded more reliable and consistent simulations of TWSV within the continental water cycle and detected possible model structure errors or mis-modelled processes for specific river basins. In tropical regions, for example, the simulated seasonal amplitude of water mass variations increased. The findings lead to an improved understanding of hydrological processes and their representation in the global model. Finally, the robustness of the results is analysed with respect to GRACE and runoff measurement errors. A main conclusion is that not only soil water and snow storage but also groundwater and surface water storage have to be included when comparing modelled and GRACE-derived total water budget data. Regarding model calibration, the regionally varying distribution of parameter sensitivity suggests tuning only the parameters of important processes within each region. Furthermore, observations of single storage components besides runoff are necessary to improve the signal amplitudes and timing of simulated TWSV and to evaluate them with higher accuracy. The results of this work highlight the value of GRACE data when merged into large-scale hydrological modelling and demonstrate methods to improve large-scale hydrological models.
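The core idea behind such multi-objective calibration, keeping all parameter sets that are not dominated in both objectives rather than picking a single "best" set, can be sketched as follows. This is only the Pareto-dominance building block, not the full ε-NSGA-II with its evolutionary operators, and the error values are toy numbers.

```python
import numpy as np

def pareto_front(errors):
    """Return indices of non-dominated points. Each row holds two error
    values to minimize (e.g. discharge misfit and GRACE TWSV misfit) for
    one candidate parameter set; a point is dominated if another point is
    at least as good in both objectives and strictly better in one."""
    errors = np.asarray(errors, dtype=float)
    n = len(errors)
    front = []
    for i in range(n):
        dominated = any(
            np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Toy trade-off: (discharge error, TWSV error) per candidate parameter set.
errs = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.2), (0.6, 0.6), (0.3, 0.8)]
front = pareto_front(errs)   # candidates on the trade-off curve
```

The surviving candidates trace the trade-off between fitting river discharge and fitting GRACE-derived storage variations; an evolutionary algorithm like ε-NSGA-II repeatedly applies this ranking while generating new parameter sets.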