The research training group NatRiskChange at the University of Potsdam and partner research institutions investigates observed and potential future changes in natural hazards. Part of the structured doctoral programme are so-called task-force missions, in which the doctoral researchers analyse a current event within a limited time frame. As part of this activity, the flash flood of 29 May 2016 in Braunsbach (Baden-Württemberg) was investigated.
This report presents first analyses concerning the classification of the rainfall, the hydrological and geomorphological processes in the catchment of the Orlacher Bach, and the damage caused.
The region was at the centre of extreme rainfall on the order of 100 mm within 2 hours. The small catchment of 6 km² has a very fast response time, especially with pre-saturated soils. In the steep valley, several smaller and larger landslides delivered more than 8,000 m³ of debris, rubble and driftwood into the stream, possibly causing short-lived blockages and subsequent breaches. Besides the large volumes of water, with a peak discharge on the order of 100 m³/s, it was above all the sediment load that caused severe damage to the buildings along the stream in Braunsbach.
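The magnitudes quoted above can be cross-checked with a short back-of-the-envelope calculation. This is only a sketch of the unit conversions; the 100 mm rainfall depth, 2-hour duration, 6 km² catchment area and 100 m³/s peak discharge are taken from the text, everything else is simple arithmetic.

```python
# Back-of-the-envelope check of the Braunsbach figures quoted above.
# Inputs from the text: ~100 mm rain in 2 h over a ~6 km² catchment,
# peak discharge on the order of 100 m³/s.

rain_depth_m = 0.100                      # 100 mm of rainfall
area_km2 = 6.0
catchment_area_m2 = area_km2 * 1e6        # 6 km² in m²

# Total rainfall volume delivered to the catchment
rain_volume_m3 = rain_depth_m * catchment_area_m2     # 600,000 m³

# Mean rainfall input rate over the 2-hour event
duration_s = 2 * 3600
mean_inflow_m3s = rain_volume_m3 / duration_s         # ~83 m³/s

peak_discharge_m3s = 100.0
# Specific peak discharge, a common flash-flood intensity measure
specific_discharge = peak_discharge_m3s / area_km2    # m³/s per km²

print(round(rain_volume_m3), round(mean_inflow_m3s, 1),
      round(specific_discharge, 1))
```

The peak discharge of roughly 100 m³/s thus exceeds even the mean rainfall input rate of about 83 m³/s, which is consistent with the very fast runoff response of a small, steep, pre-saturated catchment.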

Understanding and quantifying the total economic impacts of flood events is essential for flood risk management and adaptation planning. Yet detailed estimations of joint direct and indirect flood-induced economic impacts are rare. In this study, an innovative modeling procedure for the joint assessment of short-term direct and indirect economic flood impacts is introduced. The procedure is applied to 19 economic sectors in eight federal states of Germany after the flood events of 2013. The assessment of the direct economic impacts is object-based and considers uncertainties associated with the hazard, the exposed objects and their vulnerability. The direct economic impacts are then coupled to a supply-side input-output model to estimate the indirect economic impacts. The procedure provides distributions of direct and indirect economic impacts which capture the associated uncertainties. The distributions of the direct economic impacts in the federal states are plausible when compared to reported values. The ratio between indirect and direct economic impacts shows that the sectors Manufacturing and Financial and Insurance Activities suffered the most from indirect economic impacts. These ratios also indicate that indirect economic impacts can be almost as high as direct economic impacts. They differ strongly between the economic sectors, indicating that applying a single factor as a proxy for the indirect impacts of all economic sectors is not appropriate.
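The coupling of direct losses to a supply-side input-output model can be illustrated with a minimal Ghosh-type sketch. The two-sector setup, the allocation coefficients and the shock below are invented for illustration and are not values from the study; the point is only how a direct reduction of primary inputs in one sector propagates into output losses in all sectors.

```python
# Minimal sketch of a supply-side (Ghosh-type) input-output model, the kind
# of coupling described above. All numbers and the two-sector setup are
# illustrative assumptions, not values from the study.

def inv2(m):
    """Invert a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def row_times_mat(v, m):
    """Row vector times 2x2 matrix."""
    return [v[0] * m[0][0] + v[1] * m[1][0],
            v[0] * m[0][1] + v[1] * m[1][1]]

# Allocation (output) coefficients B: share of sector i's output sold to j.
B = [[0.10, 0.20],
     [0.30, 0.05]]
ghosh_inverse = inv2([[1 - B[0][0], -B[0][1]],
                      [-B[1][0], 1 - B[1][1]]])

value_added = [100.0, 80.0]   # baseline primary inputs per sector
shocked = [90.0, 80.0]        # direct flood loss of 10 in sector 1

x0 = row_times_mat(value_added, ghosh_inverse)   # baseline total output
x1 = row_times_mat(shocked, ghosh_inverse)       # post-event total output

direct = [value_added[i] - shocked[i] for i in range(2)]
total_loss = [x0[i] - x1[i] for i in range(2)]
indirect = [total_loss[i] - direct[i] for i in range(2)]
print(direct, [round(t, 2) for t in total_loss],
      [round(i, 2) for i in indirect])
```

Because the Ghosh inverse has nonnegative entries and a diagonal of at least one, the indirect losses derived this way are nonnegative by construction, and sector 2 suffers an indirect loss even though it had no direct damage.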

Hydrometeorological hazards caused losses of approximately 110 billion US dollars worldwide in 2016. Current damage estimations do not consider uncertainties in a comprehensive way, nor are they consistent across spatial scales. Aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach for object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated with, company damage from the flood of 2013 in Germany. Damage estimates are more accurate than those of damage models using land use data, and the estimation works reliably at all spatial scales. Therefore, it can also be used for pre-event analyses and risk assessments. This method takes hydrometeorological damage estimation and risk assessment to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
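The core idea of a probabilistic, object-based estimate that is scalable in space can be sketched as a small Monte Carlo exercise: per-object uncertainty in water depth and vulnerability is sampled, and aggregation to any spatial unit is simply the summation of sampled object damages. The object values, water depths and the square-root depth-damage relation below are made-up assumptions for illustration, not the model of the study.

```python
import random

# Hedged sketch of probabilistic object-based damage estimation:
# sample hazard and vulnerability uncertainty per object, sum over objects,
# and report the resulting loss distribution. All inputs are invented.

random.seed(42)

# (asset value in EUR, best-estimate water depth in m) per object
objects = [(500_000, 0.8), (1_200_000, 1.5), (300_000, 0.4), (800_000, 2.1)]

def sample_relative_damage(depth_m):
    """Square-root depth-damage curve with a noisy slope (assumed form)."""
    depth = max(0.0, random.gauss(depth_m, 0.2))   # hazard uncertainty
    slope = random.uniform(0.2, 0.4)               # vulnerability uncertainty
    return min(1.0, slope * depth ** 0.5)

n_samples = 5000
totals = []
for _ in range(n_samples):
    totals.append(sum(value * sample_relative_damage(depth)
                      for value, depth in objects))
totals.sort()

q = lambda p: totals[int(p * (n_samples - 1))]     # empirical quantile
print(f"median {q(0.5):,.0f}  90% interval [{q(0.05):,.0f}, {q(0.95):,.0f}]")
```

Aggregating to a municipality, federal state or country just means summing more sampled objects, which is why the uncertainty bands remain consistent across spatial scales.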

Flash floods are caused by intense rainfall events and represent an insufficiently understood phenomenon in Germany. As a result of higher precipitation intensities, flash floods might occur more frequently in the future. In combination with changing land use patterns and urbanisation, damage mitigation, insurance and risk management in flash-flood-prone regions are becoming increasingly important. However, a better understanding of damage caused by flash floods requires ex post collection of relevant but as yet sparsely available information for research. At the end of May 2016, very high and concentrated rainfall intensities led to severe flash floods in several southern German municipalities. The small town of Braunsbach stood out as a prime example of the devastating potential of such events. Eight to ten days after the flash flood event, damage assessment and data collection were conducted in Braunsbach by investigating all affected buildings and their surroundings. To record and store the data on site, the open-source software bundle KoBoCollect was used as an efficient and easy way to gather information. Since the damage-driving factors of flash floods are expected to differ from those of riverine flooding, a post-hoc data analysis was performed, aiming to identify the influence of flood processes and building attributes on damage grades, which reflect the extent of structural damage. The data analyses include the application of random forest, a random generalized linear model and multinomial logistic regression, as well as the construction of a local impact map to reveal influences on the damage grades. Further, a Spearman's rho correlation matrix was calculated. The results reveal that the damage-driving factors of flash floods differ to a certain extent from those of riverine floods. The orientation of a building relative to the flow direction shows a particularly strong correlation with the damage grade and has a high predictive power within the constructed damage models.
Additionally, the results suggest that building materials as well as various building aspects, such as the existence of a shop window, and the surroundings might have an effect on the resulting damage. To verify and confirm these outcomes, and to support future mitigation strategies, risk management and planning, more comprehensive and systematic data collection is necessary.
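Spearman's rho, the rank correlation used in the analysis above, is straightforward to compute from average ranks. The toy data below (a hypothetical flow-direction orientation score versus a damage grade) are invented purely to illustrate the computation; they are not survey data from Braunsbach.

```python
# Self-contained Spearman's rho with tie handling via average ranks.
# The example data are hypothetical, for illustration only.

def ranks(xs):
    """Average ranks (ties get the mean of their rank positions)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

orientation = [0, 1, 1, 2, 2, 3, 3, 3]   # hypothetical exposure score
damage_grade = [1, 1, 2, 2, 3, 3, 4, 5]  # hypothetical damage grade
print(round(spearman(orientation, damage_grade), 3))
```

Because it operates on ranks, Spearman's rho captures any monotone association between ordinal variables such as damage grades, which is why it suits this kind of survey data better than Pearson's r.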

Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, little is known about the flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain from using an increasing amount of training data and from a sector-specific evaluation of the data. Random forests are trained with data from two post-event surveys conducted after the flood events of 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial and service sectors. Furthermore, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage processes of the various company sectors and assets. With an increasing amount of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate at a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and to goods and stock. Consequently, sector-specific data acquisition and the consideration of sector-specific company characteristics in future flood damage assessments are expected to improve model performance more than a mere increase in data.

We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from one up to ten variables singled out from 28 candidate variables. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian-network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.

Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature, as well as lacking knowledge about their driving forces and potential effects, makes their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to several interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, carrying the risk of underestimating the hazard, with potentially disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic approaches, it treats response variables as well as explanatory variables as random variables, making no distinction between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes, and (in)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independences, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis, the Bayesian network approach is applied to different natural hazard domains (i.e. seismic hazard, flood damage and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes.
The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP) score of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Problems that typically arise are the handling of continuous variables, incomplete observations and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, allowing for a completely data-driven and distribution-free learning. An extension of the MAP score, which considers the discretization as a random variable as well, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. When incomplete observations have to be handled on top of this, the computational burden grows: iterative procedures for missing value estimation quickly become infeasible. A more efficient, albeit approximate, method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable. The discretization learned for this variable does not always have the resolution required for a good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimations or mixtures of truncated exponential functions.
All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments but in a variety of domains struggling with uncertainties.
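The decomposition at the heart of the Bayesian network approach, the joint distribution written as a product of local conditional distributions along a directed acyclic graph, can be made concrete with a toy discrete network. The structure (rain and soil saturation as parents of a flood indicator) and all probabilities below are invented purely for illustration.

```python
import itertools

# Toy discrete Bayesian network  Rain -> Flood <- Saturation, illustrating
# the decomposition P(r, s, f) = P(r) * P(s) * P(f | r, s).
# All probabilities are invented for illustration.

p_rain = {1: 0.3, 0: 0.7}    # P(heavy rain)
p_sat = {1: 0.4, 0: 0.6}     # P(pre-saturated soil)
p_flood = {                  # P(flood | rain, saturation)
    (1, 1): 0.9, (1, 0): 0.5,
    (0, 1): 0.2, (0, 0): 0.05,
}

def joint(r, s, f):
    """Joint probability from the product of local conditionals."""
    pf = p_flood[(r, s)]
    return p_rain[r] * p_sat[s] * (pf if f == 1 else 1 - pf)

# The decomposition defines a proper distribution: probabilities sum to 1.
total = sum(joint(r, s, f)
            for r, s, f in itertools.product((0, 1), repeat=3))

# Inference by enumeration: P(rain | flood) via Bayes' rule.
p_f = sum(joint(r, s, 1) for r, s in itertools.product((0, 1), repeat=2))
p_r_given_f = sum(joint(1, s, 1) for s in (0, 1)) / p_f
print(round(total, 10), round(p_r_given_f, 3))
```

Even in this three-node example, the factorization replaces a full table of 2³ joint probabilities with two priors and one conditional table, and the same enumeration scheme underlies querying any target variable in a learned network.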