Refine
Has Fulltext
- no (5)
Year of publication
- 2014 (5)
Document Type
- Article (5)
Language
- English (5)
Is part of the Bibliography
- yes (5)
Keywords
- Aleatory variability (2)
- Epistemic uncertainty (2)
- Bayesian networks (1)
- Bayesian non-parametrics (1)
- Empirical ground-motion models (1)
- Europe (1)
- GMPE (1)
- Gaussian Process regression (1)
- Generalization error (1)
- Ground-motion models (1)
Institute
- Institut für Geowissenschaften (5)
This article compares the five ground-motion models described in other articles within this special issue in terms of data selection criteria, model characteristics and predicted peak ground and response spectral accelerations. Comparisons are also made with predictions from the Next Generation Attenuation (NGA) models, to which the models presented here have similarities (e.g. a common master database has been used) but also differences (e.g. some models in this issue are nonparametric). As a result of the differing data selection criteria and derivation techniques, the predicted median ground motions show considerable differences (up to a factor of two for certain scenarios), particularly for magnitudes and distances close to or beyond the range of the available observations. The predicted influence of style-of-faulting varies widely among models, whereas site amplification factors are more similar, with peak amplification at around 1 s. These differences are greater than those among predictions from the NGA models. The models for aleatory variability (sigma), however, are similar and suggest that ground-motion variability for this region is slightly higher than that predicted by the NGA models, which are based primarily on data from California and Taiwan.
This paper presents a Bayesian non-parametric method based on Gaussian Process (GP) regression to derive ground-motion models for peak ground parameters and response spectral ordinates. Owing to its non-parametric nature, there is no need to specify a fixed functional form as in parametric regression models. A GP defines a distribution over functions, which implicitly expresses the uncertainty about the underlying data-generating process. An advantage of GP regression is that it can capture the whole uncertainty involved in ground-motion modeling, both the aleatory variability and the epistemic uncertainty associated with the underlying functional form and the data coverage. The distribution over functions is updated in a Bayesian way by computing the posterior distribution of the GP after observing ground-motion data, which in turn can be used to make predictions. The proposed GP regression model is evaluated on a subset of the RESORCE database for the SIGMA project. The experiments show that GP models achieve a lower generalization error than a simple parametric regression model. A visual assessment of different scenarios demonstrates that the inferred GP models are physically plausible.
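The posterior-update mechanics of GP regression described in the abstract can be sketched in a few lines. The squared-exponential kernel, the hyperparameter values and the synthetic distance-scaling data below are illustrative assumptions, not the model or data set used in the paper:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, sigma_f=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test,
                 length_scale=1.0, sigma_f=1.0, sigma_n=0.1):
    # Posterior mean and variance of a GP conditioned on noisy observations.
    K = (rbf_kernel(x_train, x_train, length_scale, sigma_f)
         + sigma_n**2 * np.eye(len(x_train)))
    K_s = rbf_kernel(x_train, x_test, length_scale, sigma_f)
    K_ss = rbf_kernel(x_test, x_test, length_scale, sigma_f)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Toy data: log ground motion decaying with distance (synthetic only).
rng = np.random.default_rng(0)
dist = np.linspace(1.0, 100.0, 30)
log_pga = -1.5 * np.log(dist) + 0.2 * rng.standard_normal(30)

# Predict at 10 km and 50 km; the posterior variance expresses the
# epistemic uncertainty mentioned in the abstract.
mean, var = gp_posterior(np.log(dist), log_pga,
                         np.log(np.array([10.0, 50.0])),
                         length_scale=1.0, sigma_f=2.0, sigma_n=0.2)
```

In a real application the hyperparameters would be inferred from the data (e.g. by maximizing the marginal likelihood) rather than fixed by hand.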
We apply and evaluate a recent machine learning method for the automatic classification of seismic waveforms. The method relies on Dynamic Bayesian Networks (DBNs) and supervised learning to improve the detection capabilities at 3C seismic stations. A time-frequency decomposition provides the signal characteristics from which we derive the features that define typical "signal" and "noise" patterns. Each pattern class is modeled by a DBN that specifies the interrelationships of the derived features in the time-frequency plane. The models are then trained on previously labeled segments of seismic data. New incoming seismic waveform segments can subsequently be compared against the trained DBN models to determine how likely they are to be signal or noise. As the noise characteristics of seismic stations vary smoothly in time (seasonal variation as well as anthropogenic influence), our approach accommodates a continuous adaptation of the DBN model associated with the noise class. Given the difficulty of obtaining ground truth for real data, the proof of concept and evaluation are based on experiments with 3C seismic data from the International Monitoring System stations BOSA and LPAZ.
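The classification idea above (per-class models over time-frequency features, compared by likelihood) can be illustrated with a heavily simplified stand-in: independent Gaussians per feature take the place of a full DBN, and the waveforms are synthetic. Every parameter below is invented for illustration:

```python
import numpy as np

def band_energies(x, frame=64, bands=4):
    # Time-frequency features: log energy in a few frequency bands per frame.
    n_frames = len(x) // frame
    feats = []
    for i in range(n_frames):
        spec = np.abs(np.fft.rfft(x[i*frame:(i+1)*frame])) ** 2
        edges = np.linspace(0, len(spec), bands + 1).astype(int)
        feats.append([np.log(spec[a:b].sum() + 1e-12)
                      for a, b in zip(edges[:-1], edges[1:])])
    return np.array(feats)

class GaussianClassModel:
    # Stand-in for a per-class DBN: independent Gaussians over the features.
    def fit(self, feats):
        self.mu, self.sd = feats.mean(0), feats.std(0) + 1e-6
        return self
    def loglik(self, feats):
        z = (feats - self.mu) / self.sd
        return float(np.sum(-0.5 * z**2 - np.log(self.sd)))

rng = np.random.default_rng(1)
fs = 100.0
noise = rng.standard_normal(4096)                   # broadband "noise" class
t = np.arange(4096) / fs
signal = np.sin(2*np.pi*5.0*t) + 0.1*rng.standard_normal(4096)  # narrowband

m_noise = GaussianClassModel().fit(band_energies(noise))
m_sig = GaussianClassModel().fit(band_energies(signal))

# Classify a new incoming segment by comparing class log-likelihoods.
new_seg = (np.sin(2*np.pi*5.0*np.arange(1024)/fs)
           + 0.1*rng.standard_normal(1024))
f = band_energies(new_seg)
is_signal = m_sig.loglik(f) > m_noise.loglik(f)
```

A DBN would additionally model the temporal dependencies between successive frames, which this sketch deliberately omits.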
We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured by the number of explanatory variables, which ranges from one up to ten variables selected from 28 candidates. Model validation is based on empirical damage data, with observation uncertainty taken into account. The comparison of predictive performance shows that additional explanatory variables besides water depth improve predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, Bayesian network-based predictions are the most reliable in space-time transfer, and the uncertainties associated with damage predictions are reflected most completely.
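The central finding, that explanatory variables beyond water depth can improve performance when a model is transferred to a new region, can be mimicked on synthetic data. The variables (depth, building area, precaution) and coefficients below are hypothetical, and ordinary least squares stands in for the paper's damage models:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_region(n, noise):
    # Synthetic relative-damage data: depth is the main driver; building
    # area and precaution add smaller effects (illustrative only).
    depth = rng.uniform(0, 4, n)
    area = rng.uniform(50, 300, n)
    precaution = rng.integers(0, 2, n)
    dmg = (0.15*depth + 0.0005*area - 0.1*precaution
           + noise*rng.standard_normal(n))
    X = np.column_stack([np.ones(n), depth, area, precaution])
    return X, np.clip(dmg, 0, 1)

def fit_ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(X, y, beta):
    return float(np.mean((X @ beta - y) ** 2))

X_a, y_a = make_region(400, 0.05)   # "training" region
X_b, y_b = make_region(400, 0.05)   # transfer region

beta_depth = fit_ols(X_a[:, :2], y_a)   # depth-only model (1 variable)
beta_full = fit_ols(X_a, y_a)           # depth + area + precaution
err_depth = mse(X_b[:, :2], y_b, beta_depth)
err_full = mse(X_b, y_b, beta_full)
```

When the extra variables genuinely influence damage, the richer model keeps its advantage in the transfer region; the paper's point about model structure and reliability is, of course, not captured by this linear sketch.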
Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks.
In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward and requires specific tailoring and adaptation of existing algorithms. We offer suggestions on how to tackle frequently arising problems in this context, concentrating mainly on the handling of continuous variables, incomplete data sets, and the interaction of both. By way of three case studies from earthquake, flood, and landslide research, we demonstrate data-driven Bayesian network learning and showcase the flexibility, applicability, and benefits of this approach.
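One common way to tackle the combination of continuous variables and incomplete observations mentioned above is quantile-based discretization with an explicit "missing" state before the network is learned. The sketch below shows only that preprocessing step on synthetic data; it is an assumption for illustration, not the specific algorithm adaptation developed in the paper:

```python
import numpy as np

def equal_frequency_bins(x, n_bins=4):
    # Quantile-based discretization of a continuous variable; NaNs
    # (incomplete observations) are kept as their own "missing" state (-1).
    mask = ~np.isnan(x)
    edges = np.quantile(x[mask], np.linspace(0, 1, n_bins + 1)[1:-1])
    codes = np.full(len(x), -1, dtype=int)
    codes[mask] = np.searchsorted(edges, x[mask], side="right")
    return codes, edges

# Synthetic skewed variable with 10% missing values.
rng = np.random.default_rng(3)
x = rng.lognormal(0.0, 1.0, 200)
x[rng.choice(200, 20, replace=False)] = np.nan
codes, edges = equal_frequency_bins(x, n_bins=4)
```

Equal-frequency (rather than equal-width) bins keep each discrete state well populated even for skewed variables, which stabilizes the conditional probability tables estimated during network learning.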
Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.