High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections indicate that this population will increase rapidly in the coming decades, accompanied by continued urbanisation of cities located in mountain valleys. One of the manifestations of this ongoing socio-economic change of mountain societies is a rise in settlement areas and transportation infrastructure, while a growing demand for power fuels the construction of hydropower plants along rivers in the high-mountain regions of the world. However, physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One of the potential implications of this change is an increase in frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were induced by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events remain scarce. Projected cryospheric change, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase of the water volume stored in meltwater lakes as well as the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. This valley is also home to Nepal’s second-largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the data scarcity on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant when estimating past GLOF susceptibility. For this, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to release GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. Yet the effects of monsoonality, lake elevation, and lake-area dynamics were more ambiguous. This challenges the credibility of a lake’s rapid growth in surface area as an indicator of a pending outburst, a metric that has been applied in regional GLOF assessments worldwide.
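The structure of such a multi-level logistic regression can be sketched in a few lines; the coefficients, the region-level intercept, and the predictor scaling below are illustrative placeholders, not the fitted posterior values from the thesis.

```python
import math

def glof_susceptibility(log_lake_area, glacier_mass_balance, region_intercept,
                        global_intercept=-4.6, beta_area=0.8, beta_mb=-0.5):
    """Toy posterior-mean predictor: probability that a moraine-dammed lake
    released a GLOF in the study period. All coefficients are illustrative."""
    # Linear predictor with a region-level (hierarchical) intercept
    eta = (global_intercept + region_intercept
           + beta_area * log_lake_area          # larger lakes -> higher odds
           + beta_mb * glacier_mass_balance)    # negative mass balance -> higher odds
    return 1.0 / (1.0 + math.exp(-eta))
```

The region-level intercept is what lets susceptibility vary spatially across the study area while regions still share the global coefficients.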
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological floods (1,000 m³ s⁻¹) to cataclysmic outburst floods (600,000 m³ s⁻¹), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s⁻¹ in the upper Seti Khola and attenuated to 500 m³ s⁻¹ when arriving in Pokhara’s suburbs some 15 km downstream.
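The underlying scenario-selection logic (pick the simulated peak discharge whose inundation extent best matches the mapped sediment evidence) can be sketched as follows; the raster-cell sets and candidate discharges below are hypothetical.

```python
def extent_mismatch(simulated, observed):
    """1 minus the Jaccard overlap of simulated and mapped inundated cells."""
    union = simulated | observed
    if not union:
        return 0.0
    return 1.0 - len(simulated & observed) / len(union)

def best_fit_discharge(scenario_extents, mapped_extent):
    """Return the peak discharge (dict key) with the lowest extent mismatch."""
    return min(scenario_extents,
               key=lambda q: extent_mismatch(scenario_extents[q], mapped_extent))
```

For the 2012 flood, this corresponds to choosing the HEC-RAS run whose simulated flood limits deviate least from the mapped sediment extents.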
Simulations of flow in two dimensions with peak discharges that are orders of magnitude higher, run in ANUGA, show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides the first quantitative support for the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola – producing floods with peak discharges of >50,000 m³ s⁻¹.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s⁻¹, show that the relative inundation hazard is highest in Pokhara’s north-western suburbs. There, the potential effects of hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Yet, along this reach, informal settlements and gravel-mining activities are close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally between three- and twentyfold in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outburst at a regional scale and into the flow dynamics of flood waves released by past events at a local scale, which can aid future hazard assessments on transient scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements might provide valuable inputs to anticipatory assessments of multiple risks in the Pokhara valley.
Pokhara (ca. 850 m a.s.l.), Nepal's second-largest city, lies at the foot of the Higher Himalayas and has more than tripled its population in the past 3 decades. Construction materials are in high demand in rapidly expanding built-up areas, and several informal settlements cater to unregulated sand and gravel mining in the Pokhara Valley's main river, the Seti Khola. This river is fed by the Sabche glacier below Annapurna III (7555 m a.s.l.), some 35 km upstream of the city, and traverses one of the steepest topographic gradients in the Himalayas. In May 2012 a sudden flood caused >70 fatalities and intense damage along this river and rekindled concerns about flood risk management. We estimate the flow dynamics and inundation depths of flood scenarios using the hydrodynamic model HEC-RAS (Hydrologic Engineering Center’s River Analysis System). We simulate the potential impacts of peak discharges from 1000 to 10 000 m3 s−1 on land cover based on high-resolution Maxar satellite imagery and OpenStreetMap data (buildings and road network). We also trace the dynamics of two informal settlements near Kaseri and Yamdi with high potential flood impact from RapidEye, PlanetScope, and Google Earth imagery of the past 2 decades. Our hydrodynamic simulations highlight several sites of potential hydraulic ponding that would largely affect these informal settlements and sites of sand and gravel mining. These built-up areas grew between 3- and 20-fold, thus likely raising local flood exposure well beyond changes in flood hazard. Besides these drastic local changes, about 1 % of Pokhara's built-up urban area and essential rural road network is in the highest-hazard zones highlighted by our flood simulations. Our results stress the need to adapt early-warning strategies for locally differing hydrological and geomorphic conditions in this rapidly growing urban watershed.
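The exposure bookkeeping behind such figures reduces to an overlay of building footprints with the modelled hazard zones and a fold-change between survey dates; the cell identifiers and counts below are hypothetical.

```python
def exposed_buildings(building_cells, hazard_cells):
    """Count building footprints that intersect the highest-hazard zone."""
    return sum(1 for cell in building_cells if cell in hazard_cells)

def growth_factor(count_earlier, count_later):
    """Fold increase in mapped structures between two survey dates."""
    return count_later / count_earlier
```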
Thousands of glacier lakes have been forming behind natural dams in high mountains following glacier retreat since the early 20th century. Some of these lakes abruptly released pulses of water and sediment with disastrous downstream consequences. Yet it remains unclear whether the reported rise of these glacier lake outburst floods (GLOFs) has been fueled by a warming atmosphere and enhanced meltwater production, or simply by a growing research effort. Here we estimate trends and biases in GLOF reporting based on the largest global catalog of 1,997 dated glacier-related floods in six major mountain ranges from 1901 to 2017. We find that the positive trend in the number of reported GLOFs has decayed distinctly after a break in the 1970s, coinciding with independently detected trend changes in annual air temperatures and in the annual number of field-based glacier surveys (a proxy of scientific reporting). We observe that GLOF reports and glacier surveys decelerated, while temperature rise accelerated in the past five decades. Enhanced warming alone can thus hardly explain the annual number of reported GLOFs, suggesting that temperature-driven glacier lake formation, growth, and failure are weakly coupled, or that outbursts have been overlooked. Indeed, our analysis emphasizes a distinct geographic and temporal bias in GLOF reporting, and we project that between two and four out of five GLOFs on average might have gone unnoticed in the early to mid-20th century. We recommend that such biases should be considered, or better corrected for, when attributing the frequency of reported GLOFs to atmospheric warming.
The Greenland Ice Sheet is the second-largest mass of ice on Earth. Being almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been significantly losing mass since the 1980s and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth System, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims at extending the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling.
One of these feedbacks, the melt-elevation feedback, is driven by the increase of air temperature with decreasing altitude: as the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing the melt rates even further. Glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback, as the bedrock lifts in response to a decrease in ice load, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming – from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of these feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant.
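The competition between the melt-elevation feedback and the GIA feedback can be illustrated with a deliberately minimal two-variable toy model; all parameters are hypothetical, and this is not the model configuration used in the thesis.

```python
def step(h, b, dt=1.0, s_ela=500.0, gamma=0.001, rho_ratio=0.3, tau=50.0):
    """One explicit Euler step for ice thickness h and bedrock elevation b.
    Melt-elevation feedback: mass balance falls as the surface s = b + h
    drops below the equilibrium line s_ela. GIA feedback: the bedrock
    relaxes toward its depressed equilibrium under the current ice load."""
    s = b + h
    smb = gamma * (s - s_ela)        # surface mass balance (melt below s_ela)
    h_new = max(h + dt * smb, 0.0)   # thickness cannot go negative
    b_eq = -rho_ratio * h            # equilibrium bedrock under the ice load
    b_new = b + dt * (b_eq - b) / tau
    return h_new, b_new

def run(h0=2000.0, b0=0.0, steps=1000):
    h, b = h0, b0
    for _ in range(steps):
        h, b = step(h, b)
    return h, b
```

Starting below the equilibrium line, melt and surface lowering reinforce each other and the toy ice sheet vanishes; starting well above it, the ice sheet persists while the bedrock slowly subsides.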
Furthermore, the increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering their albedo, which in turn enhances melt rates – a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this melt-albedo feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM). Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations where the melt-albedo feedback is neglected, and adds up to 58% additional ice loss under the high emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300, compared to the melt-elevation feedback.
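The amplifying effect of the melt-albedo feedback can be demonstrated with a toy loop in which each increment of melt darkens the surface; the forcing and darkening rate are hypothetical, not parameters of the diurnal Energy Balance Model.

```python
def cumulative_melt(sw_down=20.0, albedo0=0.8, darkening=0.0,
                    albedo_min=0.3, latent_heat=0.334, steps=10):
    """Total melt over `steps` intervals. With darkening > 0, melt lowers
    the albedo, which raises absorbed shortwave radiation and hence melt.
    sw_down in MJ m-2 per step, latent_heat in MJ kg-1 -> melt in kg m-2."""
    albedo, total = albedo0, 0.0
    for _ in range(steps):
        melt = max(sw_down * (1.0 - albedo), 0.0) / latent_heat
        total += melt
        albedo = max(albedo - darkening * melt, albedo_min)
    return total
```

With the feedback switched off (darkening = 0), the albedo stays fixed and total melt is strictly smaller than in the coupled case.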
Another process that could influence the resilience of the Greenland Ice Sheet is the warming-induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen’s flow law impacts the simulated response to warming. In a flow-line setup at fixed climatic mass balance, the uncertainty in the flow parameters leads to a range of ice loss comparable to the range caused by different warming levels.
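Glen's flow law relates strain rate to deviatoric stress via a power law with softness A and exponent n; under the standard shallow-ice approximation this yields a surface deformation velocity whose strong sensitivity to both parameters can be sketched directly (parameter values below are illustrative).

```python
import math

def deformation_velocity(A, n, thickness, slope_rad, rho=910.0, g=9.81):
    """Shallow-ice surface velocity from internal deformation:
    u_s = 2A / (n + 1) * (rho * g * sin(alpha))**n * H**(n + 1)."""
    driving_stress = rho * g * math.sin(slope_rad)
    return 2.0 * A / (n + 1.0) * driving_stress ** n * thickness ** (n + 1)
```

Because thickness enters with exponent n + 1 and A linearly, modest uncertainty in the flow parameters translates into a wide spread of simulated velocities and, hence, ice loss.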
While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea-level-rise contribution of the Greenland Ice Sheet. To increase the carbon-budget flexibility, some warming scenarios – while still staying within the limits of the Paris Agreement – include a temporal overshoot of global warming. I show that an overshoot by 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters of sea-level equivalent. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea-level-rise contribution from the Greenland Ice Sheet.
Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these feedbacks influence the resilience of the Greenland Ice Sheet against global warming, both in the near future and in the long term.
Global heat adaptation among urban populations and its evolution under different climate futures
(2022)
Heat and increasing ambient temperatures under climate change represent a serious threat to human health in cities. Heat exposure has been studied extensively at a global scale. Studies comparing a defined temperature threshold with future daytime temperatures over a certain period have concluded that the threat to human health will increase. Such findings, however, do not explicitly account for possible changes in future human heat adaptation and might even overestimate heat exposure. Thus, heat adaptation and its development remain unclear. Human heat adaptation refers to the local temperature to which populations are adjusted. It can be inferred from the lowest point of the U- or V-shaped heat-mortality relationship (HMR), the Minimum Mortality Temperature (MMT). While epidemiological studies inform on the MMT at the city scale for case studies, a general model applicable at the global scale to infer temporal change in MMTs had not yet been realised. The conventional approach depends on data availability, their robustness, and on access to daily mortality records at the city scale. A thorough analysis, however, must account for future changes in the MMT, as heat adaptation happens partially passively. Human heat adaptation consists of two aspects: (1) the intensity of the heat hazard that is still tolerated by human populations, meaning the heat burden they can bear, and (2) the wealth-induced technological, social, and behavioural measures that can be employed to avoid heat exposure. The objective of this thesis is to investigate and quantify human heat adaptation among urban populations at a global scale under the current climate and to project future adaptation under climate change until the end of the century. To date, this has not yet been accomplished. The evaluation of global heat adaptation among urban populations and its evolution under climate change comprises three levels of analysis.
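The conventional extraction of the MMT from an empirical heat-mortality relationship reduces to locating the minimum of the U-shaped curve; the temperature grid and relative risks below are synthetic.

```python
def minimum_mortality_temperature(temperatures, relative_risk):
    """Temperature at the lowest point of the U-shaped HMR."""
    risk, temp = min(zip(relative_risk, temperatures))
    return temp
```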
First, using the example of Germany, the MMT is calculated at the city level by applying the conventional method. Second, this thesis compiles a data pool of 400 urban MMTs to develop and train a new model capable of estimating MMTs on the basis of physical and socio-economic city characteristics using multivariate non-linear regression. The MMT is successfully described as a function of the current climate, the topography, and the socio-economic standard, independently of daily mortality data, for cities around the world. The city-specific MMT estimates represent a measure of human heat adaptation among the urban population. In a final, third analysis, the model to derive human heat adaptation was adjusted to be driven by projected climate and socio-economic variables for the future. This allowed for estimating the MMT and its change for 3,820 cities worldwide for different combinations of climate trajectories and socio-economic pathways until 2100. Knowledge of the future evolution of heat adaptation is novel, as research had so far mostly addressed heat exposure and its future development. In this work, changes in heat adaptation and exposure were analysed jointly. The result was a wide range of possible health-related outcomes up to 2100, of which two scenarios with the highest socio-economic development but opposing warming levels were highlighted for comparison. Strong economic growth based upon fossil fuel exploitation is associated with a high gain in heat adaptation, but may not be able to compensate for the associated negative health effects caused by severe climate change, with increased heat exposure in 30% to 40% of the cities investigated.
A slightly less strong but sustainable growth brings moderate gains in heat adaptation but lower heat exposure, with exposure reductions in 80% to 84% of the cities in terms of frequency (number of days exceeding the MMT) and intensity (magnitude of the MMT exceedance) due to milder global warming. Choosing a 2 °C-compatible development by 2100 would therefore lower the risk of heat-related mortality at the end of the century. In summary, this thesis makes diverse and multidisciplinary contributions to a deeper understanding of human adaptation to heat under the current and the future climate. It is one of the first studies to carry out a systematic and statistical analysis of urban characteristics which are useful as MMT drivers to establish a generalised model of human heat adaptation, applicable at the global level. A broad range of possible heat-related health outcomes for various future scenarios was shown for the first time. This work is of relevance for the assessment of heat-health impacts in regions where mortality data are inaccessible or missing. The results are useful for health-care planning at the meso- and macro-level and for urban and climate change adaptation planning. Lastly, beyond having met the posed objective, this thesis advances research towards a global future impact assessment of heat on human health by providing an alternative method of MMT estimation that is spatially and temporally flexible in its application.
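The second-level idea, estimating the MMT from city characteristics instead of mortality records, can be sketched as a regression stub; the functional form and all coefficients here are hypothetical placeholders, not the fitted model.

```python
import math

def estimate_mmt(summer_mean_temp, elevation_m, gdp_per_capita,
                 b0=10.0, b_temp=0.6, b_elev=-1.5, b_wealth=0.8):
    """Hypothetical generalised MMT model driven by climate, topography,
    and socio-economic standard (the log-wealth term makes it non-linear)."""
    return (b0
            + b_temp * summer_mean_temp
            + b_elev * elevation_m / 1000.0
            + b_wealth * math.log(gdp_per_capita))
```

Driving such a function with projected climate and socio-economic variables instead of current ones is what allows MMT change to be estimated for cities without mortality records.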
The estimation of financial losses is an integral part of flood risk assessment. The application of existing flood loss models to locations or events different from the ones used to train the models has led to low performance, showing that characteristics of the flood damaging process have not yet been sufficiently well represented. To improve flood loss model transferability, I explore various model structures aiming at incorporating different (inland water) flood types and pathways. The analysis is based on a large survey dataset of approximately 6000 flood-affected households which addresses several aspects of the flood event: not only the hazard characteristics but also information on the affected building, socioeconomic factors, the household's preparedness level, early warning, and impacts. Moreover, the dataset reports the coincidence of different flood pathways. Whilst flood types are a classification of flood events reflecting their generating process (e.g. fluvial, pluvial), flood pathways represent the route the water takes to reach the receptors (e.g. buildings). In this work, the following flood pathways are considered: levee breaches, river floods, surface water floods, and groundwater floods.
The coincidence of several hazard processes at the same time and place characterises a compound event. In fact, many flood events develop through several pathways, such as the ones addressed in the survey dataset used. Earlier loss models, although developed with one or multiple predictor variables, commonly use loss data from a single flood event which is attributed to a single flood type, disregarding specific flood pathways or the coincidence of multiple pathways. This gap is addressed by this thesis through the following research questions: 1. In which aspects do flood pathways of the same (compound inland) flood event differ? 2. How much do factors which contribute to the overall flood loss in a building differ in various settings, specifically across different flood pathways? 3. How well can Bayesian loss models learn from different settings? 4. Do compound, that is, coinciding flood pathways result in higher losses than a single pathway, and what does the outcome imply for future loss modelling?
Statistical analysis has found that households affected by different flood pathways also show, in general, differing characteristics of the affected building, preparedness, and early warning, besides the hazard characteristics. Forecasting and early warning capabilities and the preparedness of the population are dominated by the general flood type, but characteristics of the hazard at the object-level, the impacts, and the recovery are more related to specific flood pathways, indicating that risk communication and loss models could benefit from the inclusion of flood-pathway-specific information.
For the development of the loss model, several potentially relevant predictors are analysed: water depth, duration, velocity, contamination, early warning lead time, perceived knowledge about self-protection, warning information, warning source, gap between warning and action, emergency measures, implementation of property-level precautionary measures (PLPMs), perceived efficacy of PLPMs, previous flood experience, awareness of flood risk, ownership, building type, number of flats, building quality, building value, house/flat area, building area, cellar, age, household size, number of children, number of elderly residents, income class, socioeconomic status, and insurance against floods. After a variable selection, descriptors of the hazard, building, and preparedness were deemed significant, namely: water depth, contamination, duration, velocity, building area, building quality, cellar, PLPMs, perceived efficacy of PLPMs, emergency measures, insurance, and previous flood experience. The inclusion of the indicators of preparedness is relevant, as they are rarely involved in loss datasets and in loss modelling, although previous studies have shown their potential in reducing losses. In addition, the linear model fit indicates that the explanatory factors are, in several cases, differently relevant across flood pathways.
Next, Bayesian multilevel models were trained, which intrinsically incorporate uncertainties and allow for partial pooling (i.e. different groups of data, such as households affected by different flood pathways, can learn from each other), increasing the statistical power of the model. A new variable selection was performed for this new model approach, reducing the number of predictors from twelve to seven variables but keeping factors of the hazard, building, and preparedness, namely: water depth, contamination, duration, building area, PLPMs, insurance, and previous flood experience. The new model was trained not only across flood pathways but also across regions of Germany, divided according to general socioeconomic factors and insurance policies, and across flood events. The distinction across regions and flood events did not improve loss modelling and led to a large overlap of regression coefficients, with no clear trend or pattern. The distinction of flood pathways showed credibly distinct regression coefficients, leading to a better understanding of flood loss modelling and indicating one potential reason why model transferability has been challenging.
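The partial-pooling mechanism of such multilevel models can be illustrated with the classic shrinkage formula for a group mean; the variances and counts below are made up.

```python
def partially_pooled_mean(group_mean, n_group, grand_mean,
                          between_var, within_var):
    """Shrink a group estimate (e.g. one flood pathway) toward the grand
    mean; sparsely observed groups borrow more strength from the others."""
    weight = between_var / (between_var + within_var / n_group)
    return weight * group_mean + (1.0 - weight) * grand_mean
```

This is why groups of data, such as households affected by different flood pathways, can "learn from each other": groups with few observations are pulled toward the overall estimate, stabilising their coefficients.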
Finally, new model structures were trained to include the possibility of compound inland floods (i.e. when multiple flood pathways coincide on the same affected asset). The dataset does not allow for verifying in which sequence the flood pathway waves occurred, and the predictor variables reflect only their mixed or combined outcome. Thus, two Bayesian models were trained: 1. a multi-membership model, a structure which learns the regression coefficients for multiple flood pathways at the same time, and 2. a multilevel model wherein each combination of coinciding flood pathways forms an individual category. The multi-membership model resulted in credibly different coefficients across flood pathways but did not improve model performance in comparison to the model assuming only a single dominant flood pathway. The model with combined categories signals an increase in impacts after compound floods, but due to the uncertainty in model coefficients and estimates, it is not possible to ascertain such an increase as credible. That is, with the current level of uncertainty in differentiating the flood pathways, the loss estimates are not credibly distinct from those for individual flood pathways.
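The multi-membership idea can be shown in miniature: a household affected by several pathways belongs to all of them, so their coefficients are combined, whereas the combined-category model would instead hold a separate coefficient per pathway combination. Pathway names and coefficients below are hypothetical.

```python
def multimembership_effect(member_pathways, pathway_coefs, water_depth):
    """Water-depth effect on loss under multi-membership: the coefficients
    of all coinciding pathways are averaged for the affected household."""
    beta = sum(pathway_coefs[p] for p in member_pathways) / len(member_pathways)
    return beta * water_depth
```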
To overcome the challenges faced, non-linear or mixed models could be explored in the future. Interactions, moderation, and mediation effects, as well as non-linear effects, should also be studied further. Loss data collection should regularly include preparedness indicators, and either data collection or hydraulic modelling should focus on the distinction of coinciding flood pathways, which could inform loss models and further improve estimates. Flood pathways show distinct (financial) impacts, and their inclusion in loss modelling proves relevant, for it helps in clarifying the different contributions of influencing factors to the final loss, improving understanding of the damaging process, and indicating future lines of research.
Flood risk management in Germany follows an integrative approach in which both private households and businesses can make an important contribution to reducing flood damage by implementing property-level adaptation measures. While the flood adaptation behavior of private households has already been widely researched, comparatively less attention has been paid to the adaptation strategies of businesses. However, their ability to cope with flood risk plays an important role in the social and economic development of a flood-prone region. Therefore, using quantitative survey data, this study aims to identify different strategies and adaptation drivers of 557 businesses damaged by a riverine flood in 2013 and 104 businesses damaged by pluvial or flash floods between 2014 and 2017. Our results indicate that a low perceived self-efficacy may be an important factor that can reduce the motivation of businesses to adapt to flood risk. Furthermore, property-owners tended to act more proactively than tenants. In addition, high experience with previous flood events and low perceived response costs could strengthen proactive adaptation behavior. These findings should be considered in business-tailored risk communication.
River floods are among the most devastating natural hazards worldwide. As their generation is highly dependent on climatic conditions, their magnitude and frequency are projected to be affected by future climate change. Therefore, it is crucial to study the ways in which a changing climate will, and already has, influenced flood generation, and thereby flood hazard. Additionally, it is important to understand how other human influences - specifically altered land cover - affect flood hazard at the catchment scale.
The ways in which flood generation is influenced by climatic and land cover conditions differ substantially between regions. The spatial variability of these effects needs to be taken into account by using consistent datasets across large scales as well as by applying methods that can reflect this heterogeneity. Therefore, in the first study of this cumulative thesis, a complex network approach is used to find 10 clusters of similar flood behavior among 4390 catchments in the conterminous United States. By using a consistent set of 31 hydro-climatological and land cover variables, and training a separate Random Forest model for each of the clusters, the regional controls on flood magnitude trends between 1960 and 2010 are detected. It is shown that changes in rainfall are the most important drivers of these trends, while they are regionally controlled by land cover conditions.
While climate change is most commonly associated with flood magnitude trends, it has been shown to also influence flood timing. This can lead to trends in the size of the area across which floods occur simultaneously, the flood synchrony scale. The second study is an analysis of data from 3872 European streamflow gauges and shows that flood synchrony scales have increased in Western Europe and decreased in Eastern Europe. These changes are attributed to changes in flood generation, especially a decreasing relevance of snowmelt. Additionally, the analysis shows that both the absolute values and the trends of flood magnitudes and flood synchrony scales are positively correlated. If these trends persist in the future and are not accounted for, the combined increases of flood magnitudes and flood synchrony scales can exceed the capacities of disaster relief organizations and insurers.
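A minimal stand-in for the synchrony idea counts how many gauges record their annual flood within a short window of a reference gauge; the days-of-year and window length below are illustrative, not the study's exact definition of the flood synchrony scale.

```python
def synchrony_fraction(reference_day, flood_days, window_days=7):
    """Fraction of gauges whose annual flood falls within +/- window_days
    of the reference gauge's flood date (days given as day-of-year)."""
    hits = sum(1 for d in flood_days if abs(d - reference_day) <= window_days)
    return hits / len(flood_days)
```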
Hazard cascades are an additional way through which climate change can influence different aspects of flood hazard. The 2019/2020 wildfires in Australia, which were preceded by an unprecedented drought and extinguished by extreme rainfall that led to local flooding, present an opportunity to study the effects of multiple preceding hazards on flood hazard. All these hazards are individually affected by climate change, additionally complicating the interactions within the cascade. By estimating and analyzing the burn severity, rainfall magnitude, soil erosion and stream turbidity in differently affected tributaries of the Manning River catchment, the third study shows that even low magnitude floods can pose a substantial hazard within a cascade.
This thesis shows that humanity is affecting flood hazard in multiple ways with spatially and temporally varying consequences, many of which were previously neglected (e.g. flood synchrony scale, hazard cascades). To allow for informed decision making in risk management and climate change adaptation, it will be crucial to study these aspects across the globe and to project their trajectories into the future. The presented methods can depict the complex interactions of different flood drivers and their spatial variability, providing a basis for the assessment of future flood hazard changes. The role of land cover deserves greater consideration in future flood risk modelling and management studies, while holistic, transferable frameworks for hazard cascade assessment still need to be designed.
Cosmic-ray neutron sensing (CRNS) is a non-invasive tool for measuring hydrogen pools such as soil moisture, snow or vegetation. The intrinsic integration over a radial hectare-scale footprint is a clear advantage for averaging out small-scale heterogeneity, but on the other hand the data may become hard to interpret in complex terrain with patchy land use.
This study presents a directional shielding approach that blocks neutrons arriving from certain angles while admitting those entering the detector from other angles, and explores its potential to gain a sharper horizontal view of the surrounding soil moisture distribution.
Using the Monte Carlo code URANOS (Ultra Rapid Neutron-Only Simulation), we modelled the effect of additional polyethylene shields on the horizontal field of view and assessed its impact on the epithermal count rate, propagated uncertainties and aggregation time.
The results demonstrate that directional CRNS measurements are strongly dominated by isotropic neutron transport, which dilutes the signal of the targeted direction, especially from the far field. For typical count rates of customary CRNS stations, directional shielding of half-spaces cannot achieve acceptable precision at daily time resolution. However, the mere statistical distinction of two rates should be feasible.
Compound weather events may lead to extreme impacts that can affect many aspects of society including agriculture. Identifying the underlying mechanisms that cause extreme impacts, such as crop failure, is of crucial importance to improve their understanding and forecasting. In this study, we investigate whether key meteorological drivers of extreme impacts can be identified using the least absolute shrinkage and selection operator (LASSO) in a model environment, a method that allows for automated variable selection and is able to handle collinearity between variables. As an example of an extreme impact, we investigate crop failure using annual wheat yield as simulated by the Agricultural Production Systems sIMulator (APSIM) crop model driven by 1600 years of daily weather data from a global climate model (EC-Earth) under present-day conditions for the Northern Hemisphere. We then apply LASSO logistic regression to determine which weather conditions during the growing season lead to crop failure. We obtain good model performance in central Europe and the eastern half of the United States, while crop failure years in regions in Asia and the western half of the United States are less accurately predicted. Model performance correlates strongly with annual mean and variability of crop yields; that is, model performance is highest in regions with relatively large annual crop yield mean and variability. Overall, for nearly all grid points, the inclusion of temperature, precipitation and vapour pressure deficit is key to predict crop failure. In addition, meteorological predictors during all seasons are required for a good prediction. These results illustrate the omnipresence of compounding effects of both meteorological drivers and different periods of the growing season for creating crop failure events. 
Especially vapour pressure deficit and climate extreme indicators such as diurnal temperature range and the number of frost days are selected by the statistical model as relevant predictors for crop failure at most grid points, underlining their overarching relevance. We conclude that the LASSO regression model is a useful tool to automatically detect compound drivers of extreme impacts and could be applied to other weather impacts such as wildfires or floods. As the detected relationships are of purely correlative nature, more detailed analyses are required to establish the causal structure between drivers and impacts.
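The LASSO's role here, shrinking the coefficients of irrelevant weather predictors to exactly zero while retaining informative ones, can be sketched with a minimal L1-penalised logistic regression trained by proximal gradient descent. This is an illustrative stand-in, not the study's implementation (which predicts APSIM-simulated crop failure from seasonal predictors); the synthetic data and all parameter values are assumptions.

```python
import math
import random

def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty: shrink toward zero."""
    return math.copysign(max(abs(w) - lam, 0.0), w)

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-penalised logistic regression via proximal gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        # gradient step on the logistic loss, then soft-threshold (L1 prox)
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, grad)]
    return w

random.seed(0)
# two synthetic "weather" predictors: only the first drives "crop failure"
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]
w = lasso_logistic(X, y)
print(abs(w[0]) > 0.5, abs(w[1]) < 0.1)
```

The automated variable selection mentioned in the abstract corresponds to exactly this shrinkage: the coefficient of the uninformative predictor collapses to (near) zero, so the surviving predictors can be read as the selected drivers.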
The co-occurrence of warm spells and droughts can lead to detrimental socio-economic and ecological impacts, largely surpassing the impacts of either warm spells or droughts alone. We quantify changes in the number of compound warm spells and droughts from 1979 to 2018 in the Mediterranean Basin using the ERA5 data set. We analyse two types of compound events: 1) warm season compound events, which are extreme in absolute terms in the warm season from May to October, and 2) year-round deseasonalised compound events, which are extreme in relative terms with respect to the time of the year. The number of compound events increases significantly, and especially warm spells are increasing strongly – with annual growth rates of 3.9 (3.5) % for warm season (deseasonalised) compound events and 4.6 (4.4) % for warm spells – whereas for droughts the change is more ambiguous depending on the applied definition. Therefore, the rise in the number of compound events is primarily driven by temperature changes and not by the lack of precipitation. The months July and August show the highest increases in warm season compound events, whereas the highest increases in deseasonalised compound events occur in spring and early summer. This increase in deseasonalised compound events can potentially have a significant impact on the functioning of Mediterranean ecosystems as this is the peak phase of ecosystem productivity and a vital phenophase.
Precipitation forecasting has an important place in everyday life – over the course of a day we may have numerous small conversations about the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on the model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
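The extrapolation step that such benchmark models share – advecting the latest radar field along a motion field – can be sketched in its most reduced form: a constant integer motion vector and a backward (semi-Lagrangian style) lookup. This is a toy stand-in for rainymotion's warping and advection procedures, not its actual code; the grid and motion vector are invented.

```python
def advect(field, u, v, steps=1):
    """Shift a 2-D rain field by an integer motion vector (u, v) per step.

    For each target cell we look *backward* along the motion vector to find
    the source value -- the semi-Lagrangian idea in its simplest form.
    """
    rows, cols = len(field), len(field[0])
    for _ in range(steps):
        new = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                src_r, src_c = r - v, c - u  # backward lookup
                if 0 <= src_r < rows and 0 <= src_c < cols:
                    new[r][c] = field[src_r][src_c]
        field = new
    return field

rain = [[0.0] * 5 for _ in range(5)]
rain[1][1] = 8.0                       # a single "convective cell" (mm/h)
nowcast = advect(rain, u=1, v=1, steps=2)
print(nowcast[3][3])  # → 8.0: the cell has moved two cells down-right
```

Operational models replace the constant vector with a per-pixel optical flow field and interpolate sub-pixel displacements, but the backward-lookup structure is the same.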
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in atmospheric sciences has grown in concert with the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
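The critical success index used in these verification experiments is straightforward to state in code. Below is a minimal sketch of CSI over paired observation/forecast values at an intensity threshold; the sample values are invented for illustration.

```python
def csi(obs, pred, thresh):
    """Critical success index = hits / (hits + misses + false alarms),
    counting threshold exceedances in paired obs/forecast values."""
    hits = misses = false_alarms = 0
    for o, p in zip(obs, pred):
        if o >= thresh and p >= thresh:
            hits += 1
        elif o >= thresh:
            misses += 1
        elif p >= thresh:
            false_alarms += 1
    return hits / (hits + misses + false_alarms)

obs  = [0.0, 2.0, 6.0, 0.5, 1.2]   # observed rain rates (mm/h)
pred = [0.1, 1.5, 4.0, 0.0, 0.9]   # forecast rain rates (mm/h)
print(round(csi(obs, pred, thresh=1.0), 2))  # → 0.67 (2 hits, 1 miss)
```

Evaluating the same pairs at rising thresholds (e.g. 0.125, 1, 5, 10, 15 mm/h) is what exposes the smoothing problem: a blurred forecast keeps scoring hits at low thresholds while losing the rare high-intensity exceedances.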
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development together with the verification experiments for both conventional and deep learning model predictions also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help in guiding further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
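The location error defined by this framework – the Euclidean distance between the observed and the predicted feature position at each lead time – can be stated directly in code. The tracks below are invented toy data (positions in km), not results from the case study.

```python
from math import hypot

def location_error(obs_track, pred_track):
    """Euclidean distance between observed and predicted feature
    positions, one value per lead time."""
    return [hypot(ox - px, oy - py)
            for (ox, oy), (px, py) in zip(obs_track, pred_track)]

observed  = [(0, 0), (3, 4), (6, 8)]   # tracked corner positions (km)
predicted = [(0, 0), (3, 0), (6, 0)]   # a model that misses the meridional drift
print(location_error(observed, predicted))  # → [0.0, 4.0, 8.0]
```

Averaging these per-lead-time distances over many tracked features yields the mean location error curves reported above, cleanly separated from any intensity error.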
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
Mediterranean ecosystems are particularly vulnerable to climate change and the associated increase in climate anomalies. This study investigates extreme ecosystem responses evoked by climatic drivers in the Mediterranean Basin for the time span 1999–2019 with a specific focus on seasonal variations as the seasonal timing of climatic anomalies is considered essential for impact and vulnerability assessment. A bivariate vulnerability analysis is performed for each month of the year to quantify which combinations of the drivers temperature (obtained from ERA5-Land) and soil moisture (obtained from ESA CCI and ERA5-Land) lead to extreme reductions in ecosystem productivity using the fraction of absorbed photosynthetically active radiation (FAPAR; obtained from the Copernicus Global Land Service) as a proxy.
The bivariate analysis clearly showed that, in many cases, it is not just one but a combination of both drivers that causes ecosystem vulnerability. The overall pattern shows that Mediterranean ecosystems are prone to three soil moisture regimes during the yearly cycle: they are vulnerable to hot and dry conditions from May to July, to cold and dry conditions from August to October, and to cold conditions from November to April, illustrating the shift from a soil-moisture-limited regime in summer to an energy-limited regime in winter. In late spring, a month with significant vulnerability to hot conditions only often precedes the next stage of vulnerability to both hot and dry conditions, suggesting that high temperatures lead to critically low soil moisture levels with a certain time lag. In the eastern Mediterranean, the period of vulnerability to hot and dry conditions within the year is much longer than in the western Mediterranean. Our results show that it is crucial to account for both spatial and temporal variability to adequately assess ecosystem vulnerability. The seasonal vulnerability approach presented in this study helps to provide detailed insights regarding the specific phenological stage of the year in which ecosystem vulnerability to a certain climatic condition occurs.
How to cite: Vogel, J., Paton, E., and Aich, V.: Seasonal ecosystem vulnerability to climatic anomalies in the Mediterranean, Biogeosciences, 18, 5903–5927, https://doi.org/10.5194/bg-18-5903-2021, 2021.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills (2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the least studied regions of the world scientifically. In recognition of this situation, this thesis focuses on studying (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Five sites were selected in the Kathmandu Valley, the capital region of Nepal, along with two sites outside of the valley in the Makawanpur and Kaski districts, and measurements were conducted during 2013-2014 and 2016. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. There are slight differences in the diurnal cycles of CO2 and CH4, with the CO2 and CH4 mixing ratios increasing after the afternoon dip until the morning peak the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~ 200 m) during the night, after which it transitions to a convective mixing layer during the day, and the MLH increases up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise the following day, and the concentration of pollutants increases due to emissions during the night. During the afternoon, the pollutants are diluted by the circulation of the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons. The contribution of regional forest fires and agro-residue burning is seen during the pre-monsoon season. In addition, relatively higher CO values were also observed at the valley outskirts (Bhimdhunga and Naikhandi), which indicates the contribution of regional emission sources. This was also supported by the presence of higher concentrations of O3 during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm), China, as well as at the urban site Shadnagar (CH4: 1.92 ± 0.07 ppm) in India.
The daily maximum 8-hour average O3 in the Kathmandu Valley exceeds the WHO-recommended value on more than 80% of days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, the measurements of the vertical distribution of particulate matter, which were made using an ultralight aircraft and are the first of their kind in the region, detected an elevated polluted layer at ca. 3000 m a.s.l. over the Pokhara Valley. The layer could be associated with large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources will provide helpful information for developing management plans and policies to help reduce the risks for the millions of people living in the region.
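The exceedance metric referenced here – the daily maximum 8-hour running mean of O3 (often called MDA8) – can be sketched compactly. The hourly values and the guideline level below are assumptions for illustration (a round number in ppb), not measurements or the exact WHO figure.

```python
def mda8(hourly):
    """Maximum of all 8-hour running means over one day of hourly O3 values."""
    means = [sum(hourly[i:i + 8]) / 8 for i in range(len(hourly) - 7)]
    return max(means)

# one hypothetical day of hourly ozone (ppb), peaking in the afternoon
o3 = [30] * 10 + [60, 70, 80, 90, 90, 80, 70, 60] + [30] * 6
guideline = 50  # illustrative threshold in ppb, not the official WHO value
print(mda8(o3), mda8(o3) > guideline)  # → 75.0 True
```

Counting the days on which this statistic exceeds the guideline, as done in the thesis for the pre-monsoon period, then reduces to one comparison per day.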
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics is key to assessing changes in flood hazard and helps affected societies to mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its banks. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime. Rainfall, snowmelt as well as ice melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world undergo fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving average trend statistics and empirical mode decomposition allows for the extraction of high resolution signals and the identification of mechanisms driving changes in river runoff. Results point out that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangling natural variability, climate change signals and direct human impacts.
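One element of the analytical sequence described above – sampling a high runoff quantile from each year and smoothing the resulting series with a moving average – can be sketched minimally. The synthetic daily runoff, the 90th-percentile choice and the 3-year window are illustrative assumptions, not the thesis configuration.

```python
from statistics import quantiles

def annual_high_quantile(daily_by_year):
    """Sample the ~90th percentile of daily runoff from each year."""
    return [quantiles(year, n=10)[-1] for year in daily_by_year]

def moving_average(series, window=3):
    """Smooth a series of annual quantiles with a running mean."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# three synthetic "years" of daily runoff with rising winter peaks
years = [[10] * 300 + [50 + 10 * k] * 65 for k in range(3)]
q90 = annual_high_quantile(years)        # → [50.0, 60.0, 70.0]
print(moving_average(q90, window=3))     # → [60.0]
```

Applying this quantile-by-quantile, rather than only to annual maxima, is what lets the cascade separate changes in high runoff quantiles from changes in the bulk of the distribution.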
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that, due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; instead, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt is moved upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. This upward shift makes snowmelt in individual elevation bands occur earlier. The timing of the snowmelt-induced runoff, however, stays the same. Meltwater from higher elevations, at least partly, replaces meltwater from elevations below.
The insights on past and present changes in river runoff, snow cover and the underlying mechanisms form the basis for investigating potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C of global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand, and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time of the year does a warming climate increase the risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine is controlled by both precipitation and snowmelt. Caution has to be exercised in labelling sub-basins such as the Moselle catchment as purely pluvial-type or the Rhine River Basin at Basel as purely nival-type. Results indicate that these (over)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.
Floodplains are threatened ecosystems that are not only ecologically meaningful but also important for humans by creating multiple benefits. Many underlying functions, like nutrient retention, carbon sequestration or water regulation, strongly depend on regular inundation. So far, these are approached on the basis of what are called 'active floodplains'. Active floodplains, defined as being statistically inundated once every 100 years, represent less than 10% of a floodplain's original size. Still, should this remaining area be considered as one homogeneous surface in terms of floodplain function, or are there alternative approaches to quantify ecologically active floodplains? With the European Flood Hazard Maps, the extent of not only medium floods (T-medium) but also frequent floods (T-frequent) needs to be modelled by all member states of the European Union. For large German rivers, both scenarios were compared to quantify the extent, as well as selected indicators of naturalness derived from inundation. It is assumed that greater naturalness implies more inundation and better functioning. Real inundation was quantified using measured discharges from relevant gauges over the past 20 years. As a result, land uses indicating strong human impacts changed significantly from T-frequent to T-medium floodplains. Furthermore, the extent, water depth and water volume stored in the T-frequent and T-medium floodplains are significantly different. Even T-frequent floodplains experienced inundation at only half of the considered gauges during the past 20 years. This study gives evidence for considering regulation functions on the basis of ecologically active floodplains, meaning floodplains with more frequent inundation than the T-medium floodplains delineate.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can foster the development of resistance in stables. Transmission through direct contact with animals and contamination of food has already been proven. The animals' excrement, combined with a binding material, opens a further potential dispersal path into the environment if used as organic manure in agricultural landscapes. As most airborne bacteria are attached to particulate matter, this work focuses on atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (PM, particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of manure spreading from the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributes 0.35 – 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of dust emissions, but they accounted for less than 1% of total PM10 emissions due to the dilution of poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent manure incorporation, although PM10 emissions of manure incorporation were larger than those of manure application for the non-dried manure variants.
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were determined just above the threshold wind speed of 7 m s-1. Depending on wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure was applied. Microbial investigation showed that manure bacteria detached more easily from the soil surface during wind erosion due to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, they could not be detected in the manure, in the fertilized soils, or in the dust generated by manure application, land preparation or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36–72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the barns and not at all in the dust. Based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
The presence of impermeable surfaces in urban areas hinders natural drainage and directs the surface runoff to storm drainage systems with finite capacity, which makes these areas prone to pluvial flooding. The occurrence of pluvial flooding depends on the existence of minimal areas for surface runoff generation and concentration. Detailed hydrologic and hydrodynamic simulations are computationally expensive and resource-intensive. This study compared and evaluated the performance of two simplified methods to identify urban pluvial flood-prone areas, namely the fill–spill–merge (FSM) method and the topographic wetness index (TWI) method, and used the TELEMAC-2D hydrodynamic numerical model for benchmarking and validation. The FSM method uses common GIS operations to identify flood-prone depressions from a high-resolution digital elevation model (DEM). The TWI method employs maximum likelihood estimation (MLE) to probabilistically calibrate a TWI threshold (τ) based on the inundation maps from a 2D hydrodynamic model for a given spatial window (W) within the urban area. We found that the FSM method clearly outperforms the TWI method, both conceptually and in terms of model performance.
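The core GIS operation behind identifying flood-prone depressions from a DEM can be sketched as a priority-flood depression-filling pass: cells are raised to their spill level, and the difference between the filled and the original surface marks potential ponding areas. This is a minimal illustration of the general idea, not the study's FSM implementation; the grid values and function name are invented for the example.

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling on a raster DEM (sketch).

    Returns the filled surface; (filled - dem) gives the depression
    depth, i.e. potential ponding, at each cell."""
    rows, cols = dem.shape
    filled = dem.astype(float).copy()
    visited = np.zeros(dem.shape, dtype=bool)
    heap = []
    # Seed the priority queue with all boundary cells (water can exit there).
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r, c], r, c))
                visited[r, c] = True
    # Grow inward from the lowest known spill level.
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                visited[nr, nc] = True
                # Interior cells cannot drain below the spill level z.
                filled[nr, nc] = max(filled[nr, nc], z)
                heapq.heappush(heap, (filled[nr, nc], nr, nc))
    return filled

# Toy DEM: a small pit enclosed by a rim whose lowest point (the pour
# point) has elevation 4, so the pit fills up to that level.
dem = np.array([[5, 5, 5, 5],
                [5, 1, 2, 5],
                [5, 2, 1, 5],
                [5, 5, 4, 5]], dtype=float)
depth = fill_depressions(dem) - dem  # depression depth per cell
```

In an FSM-style workflow, connected cells with positive depth would then be merged into discrete depressions and ranked by storage volume.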
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out the first comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improvements in flood risk mitigation and to facilitated monitoring of progress and achievements while implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed, and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start with the event impacts, based on a bottom-up approach, and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, within the frequency analysis of the flood-triggering circulation pattern types, it was discovered that some heavy-precipitation events were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
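Classifying events by their main influencing factor with hierarchical cluster analysis can be sketched as follows: standardize the event descriptors, then agglomeratively merge the closest events. The descriptors, values and function below are purely illustrative assumptions, not data or code from the thesis (which does not specify its linkage criterion here; average linkage is used for the sketch).

```python
import numpy as np

# Hypothetical event descriptors: precipitation depth (mm),
# catchment size (km2), impervious land-use share. Illustrative values.
events = np.array([
    [120.0,  50.0, 0.10],
    [115.0,  60.0, 0.12],
    [ 30.0, 900.0, 0.05],
    [ 25.0, 850.0, 0.04],
    [ 40.0, 100.0, 0.60],
])
# Standardize each descriptor so no single unit dominates the distances.
features = (events - events.mean(axis=0)) / events.std(axis=0)

def agglomerative(points, n_clusters):
    """Naive average-linkage hierarchical (agglomerative) clustering."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average pairwise Euclidean distance between clusters a, b.
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))  # merge the closest pair
    return clusters

groups = agglomerative(features, n_clusters=3)
# Events 0-1 (rainfall-dominated), 2-3 (large catchments) and 4
# (highly impervious) end up in separate clusters.
```

Each resulting cluster can then be interpreted in terms of its dominant flood-producing factor, which is the purpose of the classification in the thesis.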
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method to the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
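The arithmetic at the heart of such a loss estimation is simply counting physically damaged units and multiplying by a replacement cost and a calibrated damage ratio per damage state. The sketch below illustrates that structure only; the unit costs and damage ratios are invented placeholder numbers, not the official UNDRR parameters or the values calibrated in the thesis.

```python
# Hypothetical replacement costs (currency units per unit) and calibrated
# damage ratios per damage state -- illustrative values only.
UNIT_COST = {"house": 50_000.0, "road_km": 200_000.0}
DAMAGE_RATIO = {"damaged": 0.25, "destroyed": 1.0}

def direct_loss(records):
    """First-order direct economic loss from physically damaged units.

    records: iterable of (asset_type, damage_state, number_of_units)."""
    return sum(n * UNIT_COST[asset] * DAMAGE_RATIO[state]
               for asset, state, n in records)

# Example event documentation: 10 destroyed houses, 40 damaged houses,
# 5 km of damaged road.
loss = direct_loss([("house", "destroyed", 10),
                    ("house", "damaged", 40),
                    ("road_km", "damaged", 5)])
# 10*50000*1.0 + 40*50000*0.25 + 5*200000*0.25 = 1,250,000
```

The quality of such a first estimate hinges on the two inputs the thesis emphasizes: country-specific cost parameters and consistent documentation of the damaged units.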
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that more attention should be paid to floods on the national scale. The clustering results on the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) provide crucial information for source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention to allow reliable monitoring of the implementation of the SFDRR. Better event documentation should be established in Turkey according to a globally accepted standard for disaster classification and loss estimation. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
Models for the prediction of monetary losses from floods mainly blend data deemed to represent a single flood type and region. Moreover, these approaches largely ignore indicators of preparedness and how predictors may vary between regions and events, challenging the transferability of flood loss models. We use a flood loss database of 1812 German flood-affected households to explore how Bayesian multilevel models can estimate normalised flood damage stratified by event, region, or flood process type. Multilevel models acknowledge natural groups in the data and allow each group to learn from others. We obtain posterior estimates that differ between flood types, with credibly varying influences of water depth, contamination, duration, implementation of property-level precautionary measures, insurance, and previous flood experience; these influences overlap across most events or regions, however. We infer that the underlying damaging processes of distinct flood types deserve further attention. Each reported flood loss and affected region involved mixed flood types, likely explaining the uncertainty in the coefficients. Our results emphasise the need to consider flood types as an important step towards applying flood loss models elsewhere. We argue that failing to do so may unduly generalise the model and systematically bias loss estimations from empirical data.
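The way multilevel models let "each group learn from others" (partial pooling) can be illustrated with the classical shrinkage estimator for a random-intercept model: each group mean is pulled toward the overall mean, and small groups are pulled harder. This is a simplified sketch, assuming the within-group variance `sigma2` and between-group variance `tau2` are known; all numbers are illustrative and unrelated to the German household data.

```python
import numpy as np

def partial_pooling(group_means, group_sizes, sigma2, tau2):
    """Shrinkage (partial-pooling) estimates for a random-intercept model.

    Each group's estimate is a precision-weighted compromise between its
    own mean and the grand mean; small groups are shrunk more."""
    group_means = np.asarray(group_means, dtype=float)
    n = np.asarray(group_sizes, dtype=float)
    grand = np.average(group_means, weights=n)  # pooled overall mean
    # Weight on the group's own mean: high for large n or large tau2.
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
    return w * group_means + (1.0 - w) * grand

# Two hypothetical flood-type groups: one well observed (n=200), one
# sparsely observed (n=5). The small group is shrunk toward the grand mean.
est = partial_pooling(group_means=[0.6, 0.2],
                      group_sizes=[200, 5],
                      sigma2=1.0, tau2=0.1)
```

In a full Bayesian treatment (e.g. with MCMC), `tau2` is itself estimated from the data, so the degree of pooling across flood types, events or regions is learned rather than fixed.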