Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks.
In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward, and requires specific tailoring and adaptation of existing algorithms. We offer suggestions on how to tackle frequently arising problems in this context and mainly concentrate on the handling of continuous variables, incomplete data sets, and the interaction of both. By way of three case studies from earthquake, flood, and landslide research, we demonstrate the method of data-driven Bayesian network learning, and showcase the flexibility, applicability, and benefits of this approach.
Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.
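To make the flavour of data-driven learning concrete, here is a deliberately simplified Python sketch: a continuous predictor is discretized and a conditional probability table is estimated from complete cases only. All variable names and values are invented, and complete-case deletion is the crudest possible treatment of incomplete data; the abstract's point is precisely that more careful handling is needed in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data set: continuous water depth (m) and binary building damage,
# with some depths missing -- purely illustrative values.
n = 500
depth = rng.uniform(0.0, 3.0, n)
damage = (rng.uniform(size=n) < depth / 3.0).astype(int)  # deeper -> likelier damage
depth[rng.choice(n, 50, replace=False)] = np.nan          # incomplete records

# Crude treatment: discretize the continuous variable, keep complete cases only.
ok = ~np.isnan(depth)
bins = np.digitize(depth[ok], [1.0, 2.0])                 # 3 depth classes
cpt = np.array([damage[ok][bins == k].mean() for k in range(3)])
# cpt[k] estimates P(damage = 1 | depth class k)
```

With the monotone toy relationship above, the estimated conditional probabilities increase with depth class, which is the kind of dependence structure a learned network would encode.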
In this paper, we propose a method of surface-wave characterization based on the deformation of the wavelet transform of the analysed signal. The phase velocity, the group velocity and the attenuation coefficient are estimated using a model-based approach that determines the propagation operator in the wavelet domain, which depends nonlinearly on a set of unknown parameters. These parameters explicitly define the phase velocity, the group velocity and the attenuation. Under the assumption that the difference between waveforms observed at a pair of stations is solely due to the dispersion characteristics and the intrinsic attenuation of the medium, we then seek the set of unknown parameters of this model. Finding the model parameters turns out to be an optimization problem, which is solved through the minimization of an appropriately defined cost function. We show that, unlike time-frequency methods that exploit only the square modulus of the transform, we can achieve a complete characterization of surface waves in a dispersive and attenuating medium. Using both synthetic examples and experimental data, we also show that it is in principle possible to separate different modes in both the time domain and the frequency domain.
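As a toy illustration of the inverse problem (stripped down to a non-dispersive medium, with an invented station spacing and medium parameters), the propagation operator between two stations can be fitted by least squares: the unwrapped phase of the spectral ratio is linear in frequency with slope −dx/c, and its log amplitude gives the attenuation. The paper's actual method is more general, with frequency-dependent parameters and a nonlinear cost function in the wavelet domain.

```python
import numpy as np

dx = 100.0                      # station separation in metres (invented)
c_true, a_true = 500.0, 0.002   # "true" phase velocity (m/s) and attenuation (1/m)

f = np.linspace(1.0, 20.0, 50)  # analysis band (Hz)
w = 2.0 * np.pi * f
# Spectral ratio of the two stations = propagation operator:
#   exp(-a * dx) * exp(-1j * w * dx / c)
ratio = np.exp(-a_true * dx) * np.exp(-1j * w * dx / c_true)

# Least-squares fit of the operator's two ingredients:
phase = np.unwrap(np.angle(ratio))            # linear in w, slope -dx/c
slope = np.polyfit(w, phase, 1)[0]
c_est = -dx / slope                           # phase velocity estimate
a_est = -np.log(np.abs(ratio)).mean() / dx    # attenuation estimate
```

In this noise-free, non-dispersive setting the fit recovers the two parameters exactly; with dispersion, c and a become functions of frequency and the fit becomes the nonlinear optimization the abstract describes.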
Characterization of polarization attributes of seismic waves using continuous wavelet transforms
(2006)
Complex-trace analysis is the method of choice for analyzing polarized data. Because particle motion can be represented by instantaneous attributes that show distinct features for waves of different polarization characteristics, it can be used to separate and characterize these waves. Traditional methods of complex-trace analysis only give the instantaneous attributes as a function of time or frequency. However, for transient wave types or seismic events that overlap in time, an estimate of the polarization parameters requires analysis of the time-frequency dependence of these attributes. We propose a method to map instantaneous polarization attributes of seismic signals in the wavelet domain and explicitly relate these attributes to the wavelet-transform coefficients of the analyzed signal. We compare our method with traditional complex-trace analysis using numerical examples. An advantage of our method is the possibility of performing the complete wave-mode separation/filtering process in the wavelet domain and its ability to provide the frequency dependence of ellipticity, which contains important information on the subsurface structure. Furthermore, using 2-C synthetic and real seismic shot gathers, we show how to use the method to separate different wave types and identify zones of interfering wave modes.
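A time-domain analogue of these attributes (not the wavelet-domain method of the paper) is the classic covariance-based polarization measure: the eigenvalues of the two-component covariance matrix give the ellipticity of the particle motion. A minimal sketch with synthetic signals:

```python
import numpy as np

def ellipticity(x, y):
    """Minor-to-major axis ratio of 2-C particle motion
    (0 = linearly polarized, 1 = circularly polarized)."""
    cov = np.cov(np.vstack([x, y]))
    lam = np.sort(np.linalg.eigvalsh(cov))
    return np.sqrt(max(lam[0], 0.0) / lam[1])  # clip tiny negative eigenvalues

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
# Circular motion: equal-amplitude components in quadrature
circular = ellipticity(np.cos(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t))
# Linear motion: components in phase
linear = ellipticity(np.cos(2 * np.pi * 5 * t), 0.5 * np.cos(2 * np.pi * 5 * t))
```

This window-based measure yields one number per window; the paper's contribution is to resolve such attributes over both time and frequency via the wavelet transform.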
Shallowly situated evaporites in built-up areas are of relevance for urban and cultural development and hydrological regulation. The hazard of sinkholes, subrosion depressions and gypsum karst is often difficult to evaluate and may quickly change with anthropogenic influence. The geophysical exploration of evaporites in metropolitan areas is often not feasible with active industrial techniques. We collect and combine different passive geophysical data, such as microgravity, ambient vibrations, deformation and hydrological information, to study the roof morphology of shallow evaporites beneath Hamburg, Northern Germany. The application of a novel gravity inversion technique leads to a 3-D depth model of the salt diapir under study. We compare the gravity-based depth model to pseudo-depths from H/V measurements and depth estimates from small-scale seismological array data. While the general range and trend of the diapir roof is consistent, a few anomalous regions are identified where H/V pseudo-depths indicate shallower structures not observed in gravity or array data. These are attributed to shallow residual caprock floaters and zones of increased porosity. The shallow salt structure clearly correlates with a relative subsidence on the order of 2 mm yr⁻¹. The combined interpretation of roof morphology, yearly subsidence rates, chemical analyses of groundwater and of hydraulic head in aquifers indicates that the salt diapir beneath Hamburg is subject to significant ongoing dissolution that may affect subrosion depressions, sinkhole distribution and land usage. The combined analysis of passive geophysical data may be exemplary for the study of shallow evaporites beneath other urban areas.
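For orientation, first-order H/V pseudo-depths of the kind compared above are commonly obtained from the quarter-wavelength relation f₀ = Vs/(4h) for a soft layer over a stiff half-space. A minimal sketch; the velocity and frequency values below are purely illustrative, not taken from the Hamburg study:

```python
def hv_pseudo_depth(f0_hz, vs_m_per_s):
    """Depth (m) of an impedance contrast from the fundamental H/V peak,
    via the quarter-wavelength relation f0 = vs / (4 * h)."""
    return vs_m_per_s / (4.0 * f0_hz)

# e.g. a 1 Hz H/V peak with an assumed average shear-wave velocity of 400 m/s
depth_m = hv_pseudo_depth(1.0, 400.0)   # -> 100 m
```

Real pseudo-depth conversions require a calibrated velocity profile, which is why the abstract cross-checks them against gravity and array-based depth estimates.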
This article presents comparisons among the five ground-motion models described in other articles within this special issue, in terms of data selection criteria, characteristics of the models and predicted peak ground and response spectral accelerations. Comparisons are also made with predictions from the Next Generation Attenuation (NGA) models, to which the models presented here have similarities (e.g. a common master database has been used) but also differences (e.g. some models in this issue are nonparametric). As a result of the differing data selection criteria and derivation techniques, the predicted median ground motions show considerable differences (up to a factor of two for certain scenarios), particularly for magnitudes and distances close to or beyond the range of the available observations. The predicted influence of style-of-faulting shows much variation among models, whereas site amplification factors are more similar, with peak amplification at around 1 s. These differences are greater than those among predictions from the NGA models. The models for aleatory variability (sigma), however, are similar and suggest that ground-motion variability from this region is slightly higher than that predicted by the NGA models, based primarily on data from California and Taiwan.
Composite ground-motion models and logic trees: Methodology, sensitivities, and uncertainties
(2005)
Logic trees have become a popular tool in seismic hazard studies. Commonly, the models corresponding to the end branches of the complete logic tree in a probabilistic seismic hazard analysis (PSHA) are treated separately until the final calculation of the set of hazard curves. This comes at the price that information regarding sensitivities and uncertainties in the ground-motion sections of the logic tree is only obtainable after disaggregation. Furthermore, from this end-branch model perspective even the designers of the logic tree cannot directly tell what ground-motion scenarios most likely would result from their logic trees for a given earthquake at a particular distance, nor how uncertain these scenarios might be or how they would be affected by the choices of the hazard analyst. On the other hand, all this information is already implicitly present in the logic tree. Therefore, with the ground-motion perspective that we propose in the present article, we treat the ground-motion sections of a complete logic tree for seismic hazard as a single composite model representing the complete state-of-knowledge-and-belief of a particular analyst on ground motion in a particular target region. We implement this view by resampling the ground-motion models represented in the ground-motion sections of the logic tree by Monte Carlo simulation (separately for the median values and the sigma values) and then recombining the sets of simulated values in proportion to their logic-tree branch weights. The quantiles of this resampled composite model provide the hazard analyst and the decision maker with a simple, clear, and quantitative representation of the overall physical meaning of the ground-motion section of a logic tree and the accompanying epistemic uncertainty. Quantiles of the composite model also provide an easy way to analyze the sensitivities and uncertainties related to a given logic-tree model.
We illustrate this for a composite ground-motion model for central Europe. Further potential fields of application are seen wherever individual best estimates of ground motion have to be derived from a set of candidate models, for example, for hazard maps, sensitivity studies, or for modeling scenario earthquakes.
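The resampling step described above can be sketched in a few lines. Here each "model" is reduced to a single median value for one fixed magnitude-distance scenario, and all branch values and weights are invented; a real application would resample the full median and sigma models separately, as the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented ground-motion branches: median ln(PGA) for one fixed scenario,
# with their logic-tree branch weights.
branch_medians = np.array([-2.0, -1.8, -1.6])
weights = np.array([0.3, 0.5, 0.2])

# Monte Carlo resampling in proportion to the branch weights
n = 100_000
picks = rng.choice(branch_medians, size=n, p=weights)

# Quantiles of the resampled composite model summarize the overall
# ground-motion estimate and its epistemic spread.
q16, q50, q84 = np.percentile(picks, [16, 50, 84])
```

The quantile spread (q16, q84) directly visualizes the epistemic uncertainty carried by the ground-motion section of the logic tree, without any disaggregation of hazard curves.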
Seismic-hazard assessment is of great importance within the field of engineering seismology. Nowadays, it is common practice to define future seismic demands using probabilistic seismic-hazard analysis (PSHA). Often it is neither obvious nor transparent how PSHA responds to changes in its inputs. In addition, PSHA relies on many uncertain inputs. Sensitivity analysis (SA) is concerned with the assessment and quantification of how changes in the model inputs affect the model response and how input uncertainties influence the distribution of the model response. Sensitivity studies are challenging primarily for computational reasons; hence, the development of efficient methods is of major importance. Powerful local (deterministic) methods widely used in other fields can make SA feasible, even for complex models with a large number of inputs; for example, automatic/algorithmic differentiation (AD)-based adjoint methods. Recently developed derivative-based global sensitivity measures can combine the advantages of such local SA methods with efficient sampling strategies facilitating quantitative global sensitivity analysis (GSA) for complex models. In our study, we propose and implement exactly this combination. It allows an upper bounding of the sensitivities involved in PSHA globally and, therefore, an identification of the noninfluential and the most important uncertain inputs. To the best of our knowledge, it is the first time that derivative-based GSA measures are combined with AD in practice. In addition, we show that first-order uncertainty propagation using the delta method can give satisfactory approximations of global sensitivity measures and allow a rough characterization of the model output distribution in the case of PSHA. An illustrative example is shown for the suggested derivative-based GSA of a PSHA that uses stochastic ground-motion simulations.
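The delta-method step mentioned at the end amounts to first-order propagation, Var[g(X)] ≈ g′(μ)² Var[X]. A self-contained toy comparing it against Monte Carlo; the response function and the input moments are made up, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def response(m):
    # Toy model response, e.g. an annual exceedance rate (illustrative only)
    return np.exp(-2.0 * m)

mu, var = 1.0, 0.04          # assumed input mean and variance

# Delta method: Var[g(X)] ~ g'(mu)^2 * Var[X]; gradient by central difference
# (an AD tool would supply this derivative exactly, as in the abstract)
h = 1e-6
grad = (response(mu + h) - response(mu - h)) / (2.0 * h)
delta_var = grad**2 * var

# Monte Carlo reference for the output variance
mc_var = response(rng.normal(mu, np.sqrt(var), 200_000)).var()
```

For this nonlinear toy response the first-order estimate comes out somewhat below the Monte Carlo value, which illustrates the sense in which the abstract calls such approximations satisfactory rather than exact.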