Technological progress allows ever more complex predictive models to be built on increasingly large datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches such as machine learning or data mining can be meaningfully deployed in this field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and examined in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are placed on these products in the scientific literature, both for the acute support of relief forces during emergency response and as input to hydrodynamic models or damage estimation. A focus of this work was therefore set on evaluating these flood masks. Based on the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only for the class to be predicted (in this case, flooded areas), but not for the negative class (dry areas). The application to Hurricane Harvey in Houston shows the high potential of this method, which depends on the quality of the initial flood mask.
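The core idea of one-class (positive-only) classification described above can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: it simply models the positive class ("flooded" pixels) as a per-feature Gaussian fitted on positive samples only, and rejects samples that fall too far outside that distribution. All names and numbers are hypothetical.

```python
import math
import random

class OneClassGaussian:
    """Toy one-class classifier: learns mean/std per feature from positive
    samples only; anything within k standard deviations on every feature
    is accepted as belonging to the positive (flooded) class."""

    def __init__(self, k=3.0):
        self.k = k

    def fit(self, X):
        n, d = len(X), len(X[0])
        self.mean = [sum(x[j] for x in X) / n for j in range(d)]
        self.std = [
            math.sqrt(sum((x[j] - self.mean[j]) ** 2 for x in X) / n) or 1e-9
            for j in range(d)
        ]
        return self

    def predict(self, X):
        # 1 = resembles the training (flooded) class, 0 = outlier (dry)
        return [
            int(all(abs(x[j] - self.mean[j]) <= self.k * self.std[j]
                    for j in range(len(x))))
            for x in X
        ]

# toy usage: "flooded" feature vectors cluster near (0, 0)
random.seed(0)
flooded = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
clf = OneClassGaussian(k=3.0).fit(flooded)
print(clf.predict([[0.1, -0.2], [9.0, 9.0]]))  # near point accepted, far point rejected
```

The point of the sketch is that no "dry" training data is ever supplied; the decision boundary comes entirely from the statistics of the positive class.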
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on implemented physical process details. This demonstrates what a risk study based on established models can deliver. Even for fluvial flooding, however, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit construction of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling of complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, and indicators derived from spectral data. Furthermore, insights into damaging processes are obtained, which are largely in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while total rainfall was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study than urban areas. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become apparent.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data mining – are evaluated with respect to the overall research questions. For hazard observation, a focus on novel algorithms appears worthwhile for future research. For hazard modelling, especially of river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling, the large and representative datasets necessary for a broad application of machine learning are still lacking. Therefore, improving the data basis in the damage domain is currently regarded as more important than the selection of algorithms.
After a century of semi-restricted floodplain development, Southern Alberta, Canada, was struck by the devastating 2013 Flood. Aging infrastructure and limited property-level floodproofing likely contributed to the $4–6 billion (CAD) in losses. Following this catastrophe, Alberta has seen a revival in flood management, largely focused on structural protections. However, concurrent with this structural work, Calgary's population grew by more than 100,000 in the five years following the flood, leading to further densification of high-hazard areas. This study implements the novel Stochastic Object-based Flood damage Dynamic Assessment (SOFDA) model framework to quantify the progression of direct-damage flood risk in a mature urban neighborhood after the 2013 Flood. Five years of post-flood remote-sensing data, property assessment records, and inundation simulations are used to construct the model. Results show that over these five years, vulnerability trends (such as densification) have increased flood risk by 4%; however, recent structural mitigation projects have reduced overall flood risk by 47% for this case study. These results demonstrate that the flood management revival in Southern Alberta has largely been successful at reducing flood risk; however, the gains are under threat from continued development and densification absent additional floodproofing regulations.
The growing worldwide impact of flood events has motivated the development and application of global flood hazard models (GFHMs). These models have become useful tools for flood risk assessment and management, especially in regions where little local hazard information is available. One of the key uncertainties associated with GFHMs is the estimation of extreme flood magnitudes to generate flood hazard maps. In this study, the 1-in-100-year flood (Q100) magnitude was estimated using flow outputs from four global hydrological models (GHMs) and two global flood frequency analysis datasets for 1350 gauges across the conterminous US. The annual maximum flows of the observed and modelled streamflow time series were bootstrapped to evaluate the sensitivity of the underlying data to extrapolation. Results show that there are clear spatial patterns of bias associated with each method. GHMs show a general tendency to overpredict at Western US gauges and underpredict at Eastern US gauges. The GloFAS and HYPE models underpredict Q100 by more than 25% at 68% and 52% of gauges, respectively. The PCR-GLOBWB and CaMa-Flood models overestimate Q100 by more than 25% at 60% and 65% of gauges in the West and Central US, respectively. The global frequency analysis datasets have spatial variabilities that differ from the GHMs. We found that river basin area and topographic elevation explain some of the spatial variability in predictive performance found in this study. However, no single model or method performs best everywhere, and we therefore recommend that a weighted ensemble of predictions of extreme flood magnitudes be used for large-scale flood hazard assessment.
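The two steps the study describes, estimating Q100 from annual maxima and bootstrapping the series to probe extrapolation sensitivity, can be sketched as follows. This is a hedged illustration, not the paper's method: it fits a Gumbel distribution by the method of moments and resamples the annual-maximum series with replacement; the flow values are invented.

```python
import math
import random

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant, appears in Gumbel moments

def gumbel_quantile(ams, T=100):
    """Fit a Gumbel distribution to annual maxima by method of moments
    and return the T-year quantile (e.g. Q100 for T=100)."""
    n = len(ams)
    mean = sum(ams) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in ams) / (n - 1))
    beta = std * math.sqrt(6) / math.pi      # scale parameter
    mu = mean - EULER_GAMMA * beta           # location parameter
    return mu - beta * math.log(-math.log(1 - 1 / T))

def bootstrap_q100(ams, n_boot=1000, seed=42):
    """Resample the annual-maximum series with replacement to get an
    approximate 95% interval for the Q100 estimate."""
    rng = random.Random(seed)
    est = sorted(gumbel_quantile([rng.choice(ams) for _ in ams])
                 for _ in range(n_boot))
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# toy annual maxima (m^3/s), purely illustrative
ams = [320, 410, 290, 505, 380, 450, 600, 350, 420, 390,
       470, 310, 540, 360, 430, 480, 330, 510, 400, 370]
q100 = gumbel_quantile(ams, T=100)
lo, hi = bootstrap_q100(ams)
print(round(q100), round(lo), round(hi))
```

With only 20 years of record, the bootstrap interval is wide, which is exactly the extrapolation sensitivity the study evaluates at each gauge.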
In the past, floods were managed primarily by flood control measures; the focus was on reducing the flood hazard, and the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from a risk perspective that includes possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments, and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful locally, but where large-scale patterns are important this approach is of limited use. Assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. In addition, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability to derive flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities could be considered and flood risk could be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long-term and continuous simulation. The novel approach of 'derived flood risk based on continuous simulations' is introduced, in which a synthetic discharge time series is used as input to flood impact models and flood risk is derived directly from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
To investigate the suitability of continuous simulation of a full flood risk chain at the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model, and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. As a proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible against the observed data available for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of 'derived flood risk based on continuous simulations'. For this purpose, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. A virtual time series of climate data covering 100 × 100 years was generated and served as input to RFM, providing 100 × 100 years of spatially consistent river discharge series, inundation patterns, and damage values. On this basis, flood risk curves and the expected annual damage could be derived directly from the damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analyses, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk: the spatial variability of occurrence probability is respected, data and methods are consistent, and catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes such as storage effects, flood attenuation, channel–floodplain interactions, and related damage-influencing effects. Finally, the simulation of a virtual period of 100 × 100 years, and the consequently large dataset of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as often done in traditional approaches, are bypassed.
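The final step described above, deriving risk curves and expected annual damage directly from a long synthetic damage series, reduces to simple statistics once the series exists. A minimal sketch, with invented numbers and Weibull plotting positions as one common (assumed, not necessarily the study's) choice of empirical exceedance probability:

```python
def expected_annual_damage(annual_damage):
    """Expected annual damage (EAD) is the mean of the annual damage series."""
    return sum(annual_damage) / len(annual_damage)

def risk_curve(annual_damage):
    """Empirical risk curve: rank damages descending and assign each the
    Weibull plotting-position exceedance probability i / (n + 1)."""
    n = len(annual_damage)
    ranked = sorted(annual_damage, reverse=True)
    return [(i / (n + 1), d) for i, d in enumerate(ranked, start=1)]

# toy series: most years no damage, a few flood years (units: million EUR)
damages = [0, 0, 0, 12, 0, 0, 85, 0, 3, 0, 0, 0, 240, 0, 0, 7, 0, 0, 0, 0]
print(expected_annual_damage(damages))   # mean over all simulated years
for p, d in risk_curve(damages)[:3]:
    print(round(p, 3), d)                # largest damages have the lowest p
```

In the study's setting the series would span 100 × 100 synthetic years rather than 20, which is what makes the empirical damage distribution dense enough to read risk off directly.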
RFM and the 'derived flood risk approach based on continuous simulations' have the potential to provide flood risk statements for national planning, reinsurance purposes, and other questions where spatially consistent, large-scale assessments are required.
The affordability of property-level adaptation measures against flooding is crucial given the movement toward integrated flood risk management, which requires individuals threatened by flooding to actively manage that risk. It is surprising that affordability is not often discussed, given the important roles that affordability and social justice play in flood risk management. This article provides a starting point for investigating the potential rate of unaffordability of property-level flood adaptation measures across Europe, using two definitions of affordability combined with two different affordability thresholds from within flood risk research. It uses the concepts of investment and payment affordability, with affordability thresholds based on residual-income and expenditure definitions of unaffordability. These concepts, in turn, are linked with social justice through fairness concerns, in that all should have equal capability to act, of which affordability is one avenue. It was found that, for a large proportion of Europe, property owners generally cannot afford a one-time payment of the cost of protective measures. These measures can be made affordable with installment payment mechanisms or similar mechanisms that spread costs over time. Therefore, the movement toward greater obligations for flood-prone residents to actively adapt to flooding should be accompanied by socially accessible financing mechanisms.
A growing focus is being placed on both individuals and communities to adapt to flooding as part of the Sendai Framework for Disaster Risk Reduction 2015-2030. Adaptation to flooding requires sufficient social capital (linkages between members of society), risk perceptions (understanding of risk), and self-efficacy (self-perceived ability to limit disaster impacts) to be effective. However, there is limited understanding of how social capital, risk perceptions, and self-efficacy interact. We seek to explore how social capital interacts with variables known to increase the likelihood of successful adaptation. To study these linkages we analyze survey data from 1010 respondents across two communities in Thua Tien-Hue Province in central Vietnam, using ordered probit models. We find overall positive correlations between social capital, risk perceptions, and self-efficacy. This partly contradicts findings from previous studies linking these concepts in Europe, which may result from differences in risk context. The absence of an overall negative trade-off between these factors has positive implications for proactive flood risk adaptation.
The intangible impacts of floods on welfare are not well investigated, even though they are important aspects of welfare. Moreover, flooding has gender-based impacts on welfare. These differing impacts create a gender-based flood risk resilience gap. We study the intangible impacts of flood risk on the subjective well-being of residents in central Vietnam. The measurement of intangible impacts through subjective well-being is a growing field within flood risk research. We find an initial drop in welfare, measured through subjective well-being, across genders when a flood is experienced. Male respondents tended to recover around 80% of their welfare losses within 5 years, while female respondents were associated with a welfare recovery of around 70%. A monetization of the impacts floods have on an individual's subjective well-being shows that for the average female respondent, between 41% and 86% of annual income would be required to compensate subjective well-being losses 5 years after experiencing a flood. The corresponding value for males is 30% to 57% of annual income. This shows that the intangible impacts of flood risk are important (across genders) and need to be integrated into flood (or climate) risk assessments to develop more socially appropriate risk management strategies.
There has been much research regarding the perceptions, preferences, behaviour, and responses of people exposed to flooding and other natural hazards. Cross-sectional surveys have been the predominant method applied in such research. While cross-sectional data can provide a snapshot of a respondent's behaviour and perceptions, it cannot be assumed that the respondent's perceptions are constant over time. As a result, many important research questions relating to dynamic processes, such as changes in risk perceptions, adaptation behaviour, and resilience, cannot be fully addressed by cross-sectional surveys. To overcome these shortcomings, there has been a call for developing longitudinal (or panel) datasets in research on natural hazards, vulnerabilities, and risks. However, experience with implementing longitudinal surveys in the flood risk domain (FRD), which pose distinct methodological challenges, is largely lacking. The key problems are sample recruitment, attrition rate, and attrition bias. We present a review of the few existing longitudinal surveys in the FRD. In addition, we investigate the potential attrition bias and attrition rates in a panel dataset of flood-affected households in Germany. We find little potential for attrition bias to occur. High attrition rates across longitudinal survey waves are the larger concern. A high attrition rate rapidly depletes the longitudinal sample. To overcome high attrition, longitudinal data should be collected as part of a multisector partnership to allow for sufficient resources to implement sample retention strategies. If flood-specific panels are developed, different sample retention strategies should be applied and evaluated in future research to understand how much-needed longitudinal surveying techniques can be successfully applied to the study of individuals threatened by flooding.
Global flood models (GFMs) are increasingly being used to estimate global-scale societal and economic risks of river flooding. Recent validation studies have highlighted substantial differences in performance between GFMs and between validation sites. However, it has not been systematically quantified to what extent the choice of the underlying climate forcing and global hydrological model (GHM) influence flood model performance. Here, we investigate this sensitivity by comparing simulated flood extent to satellite imagery of past flood events, for an ensemble of three climate reanalyses and 11 GHMs. We study eight historical flood events spread over four continents and various climate zones. For most regions, the simulated inundation extent is relatively insensitive to the choice of GHM. For some events, however, individual GHMs lead to much lower agreement with observations than the others, mostly resulting from an overestimation of inundated areas. Two of the climate forcings show very similar results, while with the third, differences between GHMs become more pronounced. We further show that when flood protection standards are accounted for, many models underestimate flood extent, pointing to deficiencies in their flood frequency distribution. Our study guides future applications of these models, and highlights regions and models where targeted improvements might yield the largest performance gains.