Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proofs of concept to indispensable tools for decision-making at the global, national, and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address such gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, consistent with the limits of coarse hydrodynamic models reported by others. Substituting downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model without the need for expensive modelling or expertise.
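The general idea behind inundation downscaling can be illustrated with a minimal sketch: project the coarse water surface elevation (WSE) onto a finer terrain model and recompute depths. This is not the dissertation's specific algorithm (which adds further refinements for hydraulically challenging regions); the function name and the simple nearest-neighbour expansion are illustrative assumptions.

```python
import numpy as np

def downscale_inundation(coarse_wse, fine_dem, factor):
    """Downscale a coarse water-surface-elevation (WSE) grid onto a finer DEM.

    coarse_wse -- 2-D array of WSE values from a coarse hydrodynamic model
    fine_dem   -- 2-D terrain array with shape = coarse shape * factor
    factor     -- integer resolution ratio between the two grids
    """
    # Expand each coarse WSE cell over a factor x factor block of fine cells
    fine_wse = np.kron(coarse_wse, np.ones((factor, factor)))
    # Water depth = water surface minus terrain; negative values mean dry ground
    depth = fine_wse - fine_dem
    depth[~(depth > 0)] = 0.0  # clip dry cells (and any NaNs) to zero depth
    return depth
```

Real downscaling algorithms additionally enforce hydraulic connectivity and smooth the water surface across coarse-cell boundaries; this sketch only shows why fine terrain alone already sharpens the inundation edge.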
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Of the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
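The positive area bias at the inundation edge can be seen in a toy example (not the study's analytical framework): averaging a depth grid over a block that is only partly wet produces a small but nonzero coarse depth, so the entire coarse cell is counted as inundated.

```python
import numpy as np

# Toy 2x2 fine-resolution water-depth grid (m) at the edge of inundation:
# only one of the four fine cells is actually wet.
fine_depth = np.array([[1.0, 0.0],
                       [0.0, 0.0]])
cell_area = 1.0  # area of one fine cell (arbitrary units)

true_area = (fine_depth > 0).sum() * cell_area       # 1 wet cell
true_volume = fine_depth.sum() * cell_area           # 1 unit of water

# Aggregate by averaging the depth grid into a single coarse cell
coarse_depth = fine_depth.mean()                     # 0.25 m > 0
coarse_area = int(coarse_depth > 0) * 4 * cell_area  # whole block counted wet
coarse_volume = coarse_depth * 4 * cell_area         # volume is preserved here
```

Depth averaging quadruples the inundated area in this example while conserving volume, which is exactly the edge-of-inundation mechanism the study formalizes.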
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen’s inequality, a well-known 1906 mathematical proof, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen’s proof in this new context, results show that typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations, and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as “hot spots of risk” in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen’s inequality to explain the overestimates reported elsewhere and offers advice for modellers on minimizing such artifacts.
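Jensen's inequality states that for a concave function f, f(E[x]) ≥ E[f(x)]. Applied here: evaluating a concave damage function at the average depth of an aggregated exposure unit overestimates the average of the per-building damages. The exponential damage curve below is an illustrative assumption, chosen only because it has the typical concave, saturating shape of depth-damage functions.

```python
import numpy as np

# Illustrative concave depth-damage function: damage fraction rises quickly
# at shallow depths and saturates toward total loss.
def damage(depth_m):
    return 1.0 - np.exp(-depth_m)

# Water depths (m) at four individual buildings inside one aggregate unit
depths = np.array([0.0, 0.5, 1.0, 3.5])

# Disaggregated estimate: damage per building, then averaged
mean_of_damage = damage(depths).mean()

# Aggregated estimate: damage function applied to the average depth
damage_of_mean = damage(depths.mean())

# Jensen's inequality for concave f: f(E[x]) >= E[f(x)] -> overestimation
```

Here the aggregated estimate exceeds the true mean damage by a wide margin, and the gap grows with the spread of depths within the unit, which is why the bias concentrates in particular regions rather than spreading evenly.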
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that, all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models in order to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential, and impetus, for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
This study identified key somatic and demographic characteristics that benefit all swimmers and, at the same time, identified further characteristics that benefit only specific swimming strokes. Three hundred sixty-three competitive-level swimmers (male [n = 202]; female [n = 161]) participated in the study. We adopted a multiplicative, allometric regression model to identify the key characteristics associated with 100 m swimming speeds (controlling for age). The model was refined using backward elimination. Characteristics that benefited some but not all strokes were identified by introducing stroke-by-predictor variable interactions. The regression analysis revealed 7 "common" characteristics that benefited all swimmers, suggesting that all swimmers benefit from having less body fat, broad shoulders and hips, a greater arm span (but shorter lower arms), and greater forearm girths with smaller relaxed arm girths. The 4 stroke-specific characteristics reveal that backstroke swimmers benefit from longer backs, a finding that can be likened to the way boats with longer hulls travel faster through the water. Other stroke-by-predictor variable interactions (taken together) identified that butterfly swimmers are characterized by greater muscularity in the lower legs. These results highlight the importance of considering somatic and demographic characteristics of young swimmers for talent identification purposes (i.e., to ensure that swimmers realize their most appropriate strokes).
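A multiplicative (allometric) model of the form speed = a · x₁^b1 · x₂^b2 · ε becomes linear after taking logarithms, so the exponents can be fitted by ordinary least squares. The sketch below uses synthetic data with hypothetical predictors (body mass and arm span) and made-up coefficients; the actual study used many more predictors plus backward elimination and interaction terms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors (hypothetical): body mass (kg) and arm span (m)
mass = rng.uniform(45.0, 90.0, n)
armspan = rng.uniform(1.5, 2.1, n)

# Generate speeds from an assumed multiplicative model with lognormal noise
speed = 0.5 * mass**0.1 * armspan**0.8 * np.exp(rng.normal(0.0, 0.01, n))

# Log-transform: log(speed) = log(a) + b1*log(mass) + b2*log(armspan) + noise
X = np.column_stack([np.ones(n), np.log(mass), np.log(armspan)])
coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
# coef[1] and coef[2] recover the allometric exponents (0.1 and 0.8)
```

Backward elimination would then repeatedly drop the predictor with the least explanatory power and refit, until only significant terms remain.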
Modern single-particle-tracking techniques produce extensive time series of diffusive motion in a wide variety of systems, from single-molecule motion in living cells to movement ecology. The quest is to decipher the physical mechanisms encoded in the data and thus to better understand the probed systems. We here augment recently proposed machine-learning techniques for decoding anomalous-diffusion data to include an uncertainty estimate in addition to the predicted output. To avoid the black-box problem, a Bayesian deep-learning technique named Stochastic Weight Averaging-Gaussian is used to train models for both the classification of the diffusion model and the regression of the anomalous diffusion exponent of single-particle trajectories. Evaluating their performance, we find that these models can achieve a well-calibrated error estimate while maintaining high prediction accuracies. Analysing the output uncertainty predictions, we relate these to properties of the underlying diffusion models, thus providing insights into the learning process of the machine and the relevance of the output.
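The core of the Stochastic Weight Averaging-Gaussian idea can be sketched without a neural network: collect parameter snapshots along training, fit a Gaussian over them, then sample parameters at prediction time so that the spread of predictions yields an uncertainty estimate. The snapshot values and the toy linear "model" below are illustrative assumptions; real SWAG also keeps a low-rank covariance term and operates on full network weights.

```python
import numpy as np

# Parameter snapshots collected along a (hypothetical) training run,
# e.g. one snapshot per epoch of SGD with a constant learning rate.
snapshots = np.array([[0.90, 2.10],
                      [1.10, 1.90],
                      [1.00, 2.00],
                      [0.95, 2.05]])

mean = snapshots.mean(axis=0)   # SWA mean of the weights
var = snapshots.var(axis=0)     # diagonal SWAG covariance (low-rank term omitted)

# Sample weight vectors from the fitted Gaussian posterior approximation
rng = np.random.default_rng(0)
samples = mean + rng.standard_normal((100, 2)) * np.sqrt(var)

def model(w, x):                # toy linear "network": y = w0 * x + w1
    return w[0] * x + w[1]

# Predictive mean and spread at a test input: the spread is the uncertainty
preds = np.array([model(w, 3.0) for w in samples])
pred_mean, pred_std = preds.mean(), preds.std()
```

A well-calibrated model is one where this predictive spread matches the empirical error, which is what the abstract's evaluation checks.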
Brownian motion and viscoelastic anomalous diffusion in homogeneous environments are intrinsically Gaussian processes. In a growing number of systems, however, non-Gaussian displacement distributions of these processes are being reported. The physical cause of the non-Gaussianity is typically seen in different forms of disorder. These include, for instance, imperfect "ensembles" of tracer particles, the presence of local variations of the tracer mobility in heterogeneous environments, or cases in which the speed or persistence of moving nematodes or cells are distributed. From a theoretical point of view, stochastic descriptions based on distributed ("superstatistical") transport coefficients as well as time-dependent generalisations based on stochastic transport parameters with built-in finite correlation time are invoked. After a brief review of the history of Brownian motion and the famed Gaussian displacement distribution, we here provide a brief introduction to the phenomenon of non-Gaussianity and the stochastic modelling in terms of superstatistical and diffusing-diffusivity approaches.
Flood loss modeling is a central component of flood risk analysis. Conventionally, this involves univariable and deterministic stage-damage functions. Recent advancements in the field promote the use of multivariable and probabilistic loss models, which consider variables beyond inundation depth and account for prediction uncertainty. Although companies contribute significantly to total loss figures, novel modeling approaches for companies are lacking. Scarce data and the heterogeneity among companies impede the development of company flood loss models. We present three multivariable flood loss models for companies from the manufacturing, commercial, financial, and service sector that intrinsically quantify prediction uncertainty. Based on object-level loss data (n = 1,306), we comparatively evaluate the predictive capacity of Bayesian networks, Bayesian regression, and random forest in relation to deterministic and probabilistic stage-damage functions, serving as benchmarks. The company loss data stem from four postevent surveys in Germany between 2002 and 2013 and include information on flood intensity, company characteristics, emergency response, private precaution, and resulting loss to building, equipment, and goods and stock. We find that the multivariable probabilistic models successfully identify and reproduce essential relationships of flood damage processes in the data. The assessment of model skill focuses on the precision of the probabilistic predictions and reveals that the candidate models outperform the stage-damage functions, while differences among the proposed models are negligible. Although the combination of multivariable and probabilistic loss estimation improves predictive accuracy over the entire data set, wide predictive distributions stress the necessity for the quantification of uncertainty.
Observations of rift and rifted margin architecture suggest that significant spatial and temporal structural heterogeneity develops during the multiphase evolution of continental rifting. Inheritance is often invoked to explain this heterogeneity, such as preexisting anisotropies in rock composition, rheology, and deformation. Here, we use high-resolution 3-D thermal-mechanical numerical models of continental extension to demonstrate that rift-parallel heterogeneity may develop solely through fault network evolution during the transition from distributed to localized deformation. In our models, the initial phase of distributed normal faulting is seeded through randomized initial strength perturbations in an otherwise laterally homogeneous lithosphere extending at a constant rate. Continued extension localizes deformation onto lithosphere-scale faults, which are laterally offset by tens of km and discontinuous along-strike. These results demonstrate that rift- and margin-parallel heterogeneity of large-scale fault patterns may in-part be a natural byproduct of fault network coalescence.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.