Towards unifying approaches in exposure modelling for scenario-based multi-hazard risk assessments
(2023)
This cumulative thesis presents a stepwise investigation of the exposure modelling process for natural-hazard risk assessment, highlighting its importance, little discussed to date, and the uncertainties associated with it. Although "exposure" refers to a very broad concept covering everything (and everyone) susceptible to damage, in this thesis it is narrowed down to the modelling of large-area residential building stocks. Classical building exposure models for risk applications have been constructed by relying entirely on unverified expert elicitation over data sources (e.g., outdated census datasets), and hence have implicitly been assumed to be static in time and space. Moreover, their spatial representation has typically been simplified by geographically aggregating the inferred composition onto coarse administrative units whose boundaries do not always capture the spatial variability of the hazard intensities required for accurate risk assessments. These two shortcomings, and the related epistemic uncertainties embedded within exposure models, are tackled in the first three chapters of the thesis. The exposure composition of large-area residential building stocks is studied within the scope of scenario-based earthquake loss models. A proposal for optimal spatial aggregation areas of exposure models for various hazard-related vulnerabilities is then presented, focusing on ground-shaking and tsunami risks. Building on the experience gained in studying the composition and spatial aggregation of exposure for various hazards, the thesis then moves towards a multi-hazard context, addressing cumulative damage and losses due to consecutive hazard scenarios. This is achieved by proposing a novel method that accounts for pre-existing damage descriptions of building portfolios as a key input to scenario-based multi-risk assessment.
Finally, this thesis shows how the integration of the aforementioned elements can be used in risk communication practices. This is done through a modular architecture based on the exploration of quantitative risk scenarios that are contrasted with the social risk perceptions of the communities directly exposed to natural hazards.
In Chapter 1, a Bayesian approach is proposed to update the prior assumptions on this composition (i.e., the proportions per building typology). This is achieved by integrating high-quality real observations, thereby capturing the intrinsic probabilistic nature of the exposure model. Such observations are accounted for as real evidence from two sources: field inspections (Chapter 2) and freely available data sources used to update existing (but outdated) exposure models (Chapter 3). In both chapters, earthquake scenarios with parametrised ground-motion fields are used to investigate, through sensitivity analyses, the role of the epistemic uncertainties related to the exposure composition. Parametrised scenarios of seismic ground shaking serve as the hazard input to study the physical vulnerability of building portfolios. The second issue, the spatial aggregation of building exposure models, is investigated within two decoupled vulnerability contexts: seismic ground shaking, through the integration of remote sensing techniques (Chapter 3), and a multi-hazard context that integrates the occurrence of associated tsunamis (Chapter 4). Therein, a careful selection of the spatial aggregation entities, pursuing both computational efficiency and accuracy in the risk estimates for such independent hazard scenarios (i.e., earthquake and tsunami), is discussed. The physical vulnerability of large-area building portfolios due to tsunamis is thus considered through two main frames, which are then contrasted: considering and disregarding the interaction at the vulnerability level, through consecutive and decoupled hazard scenarios respectively.
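The Bayesian updating of a typology composition described above can be sketched with a conjugate Dirichlet-multinomial model: expert-based prior proportions become pseudo-counts, and surveyed building counts update them. The typology labels, prior values, and counts below are purely illustrative, not values from the thesis:

```python
import numpy as np

# Hypothetical typology labels, prior composition, and survey counts (illustrative):
typologies = ["masonry", "reinforced_concrete", "timber"]
prior_props = np.array([0.5, 0.3, 0.2])      # expert-based prior proportions
prior_strength = 50.0                        # pseudo-count weight given to the prior
alpha_prior = prior_props * prior_strength   # Dirichlet prior parameters

observed_counts = np.array([12, 45, 3])      # buildings observed per typology

# Conjugate Dirichlet-multinomial update: posterior = prior pseudo-counts + data
alpha_post = alpha_prior + observed_counts
posterior_mean = alpha_post / alpha_post.sum()

for name, p in zip(typologies, posterior_mean):
    print(f"{name}: posterior proportion {p:.3f}")
```

The prior strength controls how quickly field evidence overrides the expert elicitation: larger pseudo-counts make the posterior cling to the prior, smaller ones let the survey dominate.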
In contrast to Chapter 4, where no cumulative damage is addressed, Chapter 5 integrates data and approaches generated in earlier chapters with a novel modular method to study the likely interactions at the vulnerability level on building portfolios. This is tested by evaluating cumulative damage and losses after earthquakes of increasing magnitude followed by their respective tsunamis. The method is grounded in the possibility of re-using existing fragility models within a probabilistic framework. The same approach is followed in Chapter 6 to forecast the likely cumulative damage to be experienced by a building stock located in a volcanic multi-hazard setting (ash-fall and lahars). In that chapter, special focus is placed on the manner in which the forecast loss metrics are communicated to locally exposed communities. Co-existing quantitative scientific approaches (i.e., comprehensive exposure models and explorative risk scenarios involving single and multiple hazards) and semi-qualitative social risk perception (i.e., the level of understanding that the exposed communities have of their own risk) are jointly considered. This integration ultimately allows the thesis to contribute to enhancing preparedness, science dissemination at the local level, and technology transfer initiatives.
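The idea of re-using fragility models across consecutive events can be illustrated with a minimal sketch: a portfolio described by three damage states is propagated through two hazard scenarios, and pre-damaged buildings follow a degraded (lower-median) fragility curve. All parameter and intensity values are hypothetical stand-ins, not numbers from the thesis:

```python
import math

def frag(im, theta, beta=0.5):
    """Lognormal fragility curve: P(reaching the damage state | intensity im)."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))

# Hypothetical state-dependent medians: pre-damaged buildings are more fragile.
THETA = {"intact": 0.8, "damaged": 0.4}

def apply_event(state, im):
    """Propagate the portfolio's damage-state probabilities through one event."""
    f_i = frag(im, THETA["intact"])    # intact -> damaged
    f_d = frag(im, THETA["damaged"])   # damaged -> collapsed
    return {
        "intact": state["intact"] * (1.0 - f_i),
        "damaged": state["intact"] * f_i + state["damaged"] * (1.0 - f_d),
        "collapsed": state["collapsed"] + state["damaged"] * f_d,
    }

state = {"intact": 1.0, "damaged": 0.0, "collapsed": 0.0}
state = apply_event(state, im=0.6)   # earthquake scenario
state = apply_event(state, im=0.5)   # consecutive tsunami scenario
print({k: round(v, 3) for k, v in state.items()})
```

Because each event only redistributes probability mass between states, the cumulative effect of the sequence emerges naturally: the second event produces collapses that neither event would cause alone on an intact stock.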
Finally, a synthesis of this thesis, along with some perspectives for improvement and future work, is presented.
Uncertainties are pervasive in Earth system modelling. This is not just due to a lack of knowledge about physical processes, but also has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling itself. It is therefore indispensable to quantify uncertainty in order to determine which results are robust under it. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase-space topology and the qualitative dynamics of the system. We address several types of uncertainty and apply methods of dynamical systems theory to a prominent field of climate research, the Indian monsoon. For a systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated. It exhibits a saddle-node bifurcation with respect to those parameters that influence the heat budget of the system, accompanied by a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the question is twofold: first, whether the occurrence of this bifurcation is robust against uncertainties in the parameters and in the number of considered processes, and second, whether the bifurcation can be reached under climate change. The results indicate, for example, that the bifurcation point is robust against all considered parameter uncertainties, while reaching the critical point under climate change appears rather improbable. A novel method is applied to analyse the occurrence and the position of the bifurcation point in the monsoon model under parameter uncertainties. This method combines two standard approaches: bifurcation analysis and multi-parameter ensemble simulations.
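The combination of bifurcation analysis with multi-parameter ensemble simulations can be sketched on a toy system with a saddle-node bifurcation, dx/dt = mu + a*x - x**2: for each ensemble member (a sampled value of the uncertain parameter a), the fold position mu* is located numerically. This illustrates the generic procedure only; it is not the monsoon box model itself:

```python
import numpy as np

def n_equilibria(mu, a):
    """Number of real equilibria of the toy model dx/dt = mu + a*x - x**2."""
    roots = np.roots([-1.0, a, mu])
    return int(np.sum(np.abs(roots.imag) < 1e-9))

def fold_position(a, lo=-10.0, hi=10.0, tol=1e-8):
    """Bisection for the saddle-node point mu* where two equilibria merge.

    Below mu* there is no real equilibrium; above there are two.
    (Analytically, mu* = -a**2 / 4 for this toy model.)
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n_equilibria(mid, a) >= 2:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Multi-parameter ensemble: sample the uncertain parameter and record the
# bifurcation position in every member, yielding its uncertainty range.
rng = np.random.default_rng(0)
ensemble = rng.uniform(0.5, 1.5, size=200)
folds = np.array([fold_position(a) for a in ensemble])
print(f"fold position mu* across the ensemble: {folds.min():.3f} .. {folds.max():.3f}")
```

The same recipe carries over to models without closed-form equilibria: replace the root count by a numerical continuation or equilibrium-tracking step, and the ensemble loop stays unchanged.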
As a model-independent and therefore universal procedure, this method allows the uncertainty of a bifurcation in a high-dimensional parameter space to be investigated in many other models. With the monsoon model, the uncertainty about the external influence of the El Niño/Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism remains controversial. As a contribution to this debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, the coupling through the trade winds is identified as the key link between these two major climate constituents. On the basis of this physical mechanism, the observed monsoon rainfall data can be reproduced to a great extent. Moreover, the mechanism can be identified in two general circulation models (GCMs), both for the present-day situation and for future projections under climate change. Furthermore, uncertainties in the process of coupling models are investigated, with a focus on comparing forced dynamics with fully coupled dynamics. The former describes a particular type of coupling in which the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system. The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted. All in all, this thesis contributes to the fundamental issue of dealing with the uncertainties that the climate modelling community is confronted with.
Although some uncertainties can be included in the interpretation of the model results, intrinsic uncertainties were identified that are inevitable within a given modelling paradigm and are provoked by the specific modelling approach.
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortages, or, at the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically, one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or when it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random and to describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function, whose detection is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem that differs somewhat from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression) or on finding a suitable, potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e. comparing the most suitable long-range dependent and the most suitable short-range dependent model.
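The contrast between algebraic and exponential autocorrelation decay can be made concrete with the exact ACF recursion of a FARIMA(0,d,0) process against an AR(1) benchmark; the parameter values d = 0.3 and phi = 0.7 are illustrative choices, not fits from the thesis:

```python
import numpy as np

def farima_acf(d, kmax):
    """Exact ACF of FARIMA(0, d, 0): rho(k) = rho(k-1) * (k - 1 + d) / (k - d)."""
    rho = np.ones(kmax + 1)
    for k in range(1, kmax + 1):
        rho[k] = rho[k - 1] * (k - 1 + d) / (k - d)
    return rho

def ar1_acf(phi, kmax):
    """ACF of an AR(1) process: exponential decay phi**k."""
    return phi ** np.arange(kmax + 1)

# Long-range dependent (algebraic, ~ k**(2d-1)) vs short-range (exponential) decay
lrd = farima_acf(0.3, 200)
srd = ar1_acf(0.7, 200)
print(f"ACF at lag 100: FARIMA {lrd[100]:.4f} vs AR(1) {srd[100]:.2e}")
```

At short lags the two curves can look deceptively similar, which is exactly why a formal detection strategy is needed; at lag 100 the exponential ACF is already numerically negligible while the algebraic one is not.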
Approaching the task this way requires (a) a suitable class of long-range and short-range dependent models, along with suitable means for parameter estimation, and (b) a reliable model selection strategy capable of discriminating also between non-nested models. The flexible FARIMA model class, together with the Whittle estimator, fulfils the first requirement. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic; its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of the p-value and power estimates obtained from the simulated distributions. The results turn out to depend on the model parameters; in many cases, however, the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability, or inability, to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return-level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly delays measures needed to counteract it. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return-level uncertainty analysis.
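The simulation-based likelihood-ratio test can be sketched on a pair of simple non-nested models (here exponential versus lognormal, standing in for the short- and long-range dependent candidates): fit both by maximum likelihood, take the log-likelihood ratio as the test statistic, and build its null distribution by refitting on parametric simulations from one candidate. The model pair and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def loglik_exp(x):
    """Maximized exponential log-likelihood (MLE: lambda = 1 / mean)."""
    lam = 1.0 / x.mean()
    return x.size * np.log(lam) - lam * x.sum()

def loglik_lognorm(x):
    """Maximized lognormal log-likelihood (MLEs: mean and variance of log x)."""
    lx = np.log(x)
    s2 = lx.var()
    return -lx.sum() - 0.5 * x.size * (np.log(2.0 * np.pi * s2) + 1.0)

def lr_stat(x):
    """Log-likelihood ratio of the two non-nested candidate models."""
    return loglik_lognorm(x) - loglik_exp(x)

# "Observed" record -- here drawn from a lognormal, so the second model is true.
data = rng.lognormal(mean=0.0, sigma=1.0, size=300)
t_obs = lr_stat(data)

# Null distribution: simulate from the fitted exponential, refit both models.
lam_hat = 1.0 / data.mean()
sims = np.array([lr_stat(rng.exponential(1.0 / lam_hat, size=data.size))
                 for _ in range(500)])
p_value = float(np.mean(sims >= t_obs))
print(f"LR = {t_obs:.2f}, simulated p-value = {p_value:.3f}")
```

The same skeleton applies to FARIMA candidates by swapping in Whittle estimation and FARIMA simulation; the spread of `sims` also shows directly how well (or poorly) the two candidates can be discriminated at this sample size.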
A reliable detection of long-range dependence is thus highly relevant in practical applications, and examples related to extreme value analysis are not limited to hydrology. The increased uncertainty of return-level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on, and the development of, methods for the detection of long-range dependence.
Uncertainty about the sensitivity of the climate system to changes in the Earth's radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining the uncertainty range of this sensitivity is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001), inferred from comprehensive climate models, illustrates that the strength of the simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain the uncertainty in climate sensitivity. To this end, we first generate a large ensemble of model simulations covering different feedback strengths, and then test their consistency with present-day observational data and proxy data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, from 1.3 to 5.5°C, and were generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate, and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models.
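How a spread in feedback strengths translates into a spread in climate sensitivity can be sketched with the standard forcing-feedback relation, dT2x = F2x / (lambda_Planck - sum of feedbacks), applied to a synthetic ensemble. The feedback distributions below are illustrative placeholders, not the CLIMBER-2 ensemble values:

```python
import numpy as np

F_2X = 3.7            # radiative forcing of CO2 doubling (W m^-2, standard value)
LAMBDA_PLANCK = 3.2   # Planck (blackbody) response (W m^-2 K^-1, standard value)

rng = np.random.default_rng(1)
n = 1000
# Hypothetical ensemble of feedback strengths in W m^-2 K^-1 (illustrative only):
water_vapour = rng.normal(1.6, 0.30, n)
lapse_rate = rng.normal(-0.6, 0.20, n)
clouds = rng.normal(0.4, 0.25, n)
albedo = rng.normal(0.3, 0.10, n)

feedback_sum = water_vapour + lapse_rate + clouds + albedo
stable = feedback_sum < LAMBDA_PLANCK        # discard runaway (unstable) members
sensitivity = F_2X / (LAMBDA_PLANCK - feedback_sum[stable])

q5, q95 = np.percentile(sensitivity, [5, 95])
print(f"5-95% climate sensitivity across the ensemble: {q5:.1f} .. {q95:.1f} K")
```

Because the feedbacks appear in the denominator, a roughly symmetric spread in their sum produces the characteristic skewed sensitivity distribution with a long upper tail.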
We show that the requirement for consistency between simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test for model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints, based on these characteristics, do not further constrain uncertainty in climate sensitivity. The novel approach presented in this thesis consists in systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, the set of our model versions reveals a close link between the simulated warming due to a doubling of CO2, and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy-data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity. On the whole, this thesis highlights that LGM proxy-data information can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
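The regression-based constraint can be sketched as follows: a synthetic ensemble provides a near-linear relationship between simulated LGM cooling and climate sensitivity, and sampling a hypothetical proxy reconstruction (with its uncertainty) through the fitted regression yields sensitivity quantiles. None of the numbers below are the thesis's actual results:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic ensemble: each member pairs its simulated LGM cooling with its
# climate sensitivity; the near-linear link is illustrative, not CLIMBER-2 output.
n = 100
sensitivity = rng.uniform(1.3, 5.5, n)                        # K per CO2 doubling
lgm_cooling = -1.2 * sensitivity + rng.normal(0.0, 0.3, n)    # K, regional mean

# Fit sensitivity = a * cooling + b across the ensemble members
a, b = np.polyfit(lgm_cooling, sensitivity, 1)

# Propagate a hypothetical proxy reconstruction through the fitted regression
proxy_cooling = rng.normal(-3.0, 0.4, 10_000)                 # K, illustrative pdf
estimate = a * proxy_cooling + b
q5, q95 = np.percentile(estimate, [5, 95])
print(f"regression-based 5-95% sensitivity range: {q5:.1f} .. {q95:.1f} K")
```

The strength of the constraint is set jointly by the scatter around the regression and the width of the proxy uncertainty; widening either inflates the resulting quantile range, mirroring the enlarged estimates reported for uncertain glacial forcing.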