Institut für Geowissenschaften
A partially non-ergodic ground-motion prediction equation is estimated for Europe and the Middle East, using a hierarchical model that accounts for regional differences. The scaling of ground-motion intensity measures is assumed to be similar, but not identical, across regions: some coefficients are treated as random variables sampled from an underlying global distribution. The coefficients are estimated by Bayesian inference, which allows the epistemic uncertainty in the coefficients, and consequently in model predictions, to be quantified in a rigorous way. The model is estimated from peak ground acceleration data from nine European/Middle Eastern regions. The number of earthquakes and records differs greatly between regions; however, due to the hierarchical nature of the model, regions with only few data points borrow strength from regions with more data. This makes it possible to estimate a separate set of coefficients for each region. Different regionalized models, in which different coefficients are assumed to be regionally dependent, are compared. Results show that regionalizing the coefficients for magnitude and distance scaling improves model performance. The models for all regions are physically sound, even for regions comprising only very few earthquakes.
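The "borrowing strength" behaviour described above can be caricatured with a closed-form shrinkage rule: each region's coefficient estimate is pulled toward a global mean, with data-rich regions pulled less. This is only an empirical-Bayes-style sketch under assumed variance parameters; the study itself fits the full hierarchical model by Bayesian inference.

```python
# Hedged sketch of partial pooling: shrink per-region coefficient
# estimates toward a global mean. tau2 (between-region variance) and
# sigma2 (within-region variance of one record) are assumed inputs,
# not values from the study.
def pooled_coefficient(regional_est, n_records, tau2, sigma2):
    """regional_est: per-region estimates of one coefficient;
    n_records: number of records per region."""
    global_mean = sum(regional_est) / len(regional_est)
    pooled = []
    for b, n in zip(regional_est, n_records):
        # weight tends to 1 as n grows: data-rich regions keep their own estimate
        w = tau2 / (tau2 + sigma2 / n)
        pooled.append(w * b + (1 - w) * global_mean)
    return pooled
```

A region with only two records ends up close to the global mean, while a region with hundreds of records essentially keeps its own estimate.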
In probabilistic seismic hazard analysis, different ground-motion prediction equations (GMPEs) are commonly combined within a logic tree framework. The selection of appropriate GMPEs, however, is a non-trivial task, especially for regions where strong-motion data are sparse and no indigenous GMPE exists, because the set of models needs to capture the whole range of ground-motion uncertainty. In this study we investigate the aggregation of GMPEs into a mixture model with the aim of inferring a backbone model that can represent the center of the ground-motion distribution in a logic tree analysis. This central model can be scaled up and down to obtain the full range of ground-motion uncertainty. The combination of models into a mixture is inferred from observed ground-motion data. We tested the new approach for Northern Chile, a region for which no indigenous GMPE exists. Mixture models were calculated separately for interface and intraslab events. For each source type we aggregated eight subduction zone GMPEs using mainly new strong-motion data that were recorded within the Plate Boundary Observatory Chile project and processed within this study. We show that the mixture performs better than any of its component GMPEs, and comparably to a regression model derived from the same dataset. The mixture model appears to represent the median ground motions in that region fairly well; it can thus serve as a backbone model for the logic tree.
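The aggregation idea above, fixed component models with only the mixture weights learned from data, can be sketched with a small EM loop. The lognormal residual sigma and the component predictions below are invented placeholders, not the eight subduction-zone GMPEs of the study.

```python
import math

# Hedged sketch: learn mixture weights over fixed component GMPEs by
# expectation-maximization, assuming lognormal residuals with a common
# (assumed) sigma. Placeholder inputs only.
def em_mixture_weights(log_obs, log_medians, sigma=0.6, iters=200):
    """log_obs[i]: observed log ground motion for record i;
    log_medians[k][i]: model k's log-median prediction for record i.
    Returns the fitted mixture weights."""
    n, K = len(log_obs), len(log_medians)
    w = [1.0 / K] * K
    for _ in range(iters):
        counts = [0.0] * K
        for i, y in enumerate(log_obs):
            # E-step: responsibility of each component for record i
            lik = [w[k] * math.exp(-0.5 * ((y - log_medians[k][i]) / sigma) ** 2)
                   for k in range(K)]
            tot = sum(lik)
            for k in range(K):
                counts[k] += lik[k] / tot
        # M-step: weights are average responsibilities
        w = [c / n for c in counts]
    return w
```

With synthetic data centered on one component's predictions, almost all weight collects on that component, mirroring how the data select the best-fitting models.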
A SSHAC Level 3 Probabilistic Seismic Hazard Analysis for a New-Build Nuclear Site in South Africa
(2015)
A probabilistic seismic hazard analysis has been conducted for a potential nuclear power plant site on the coast of South Africa, a country of low-to-moderate seismicity. The hazard study was conducted as a SSHAC Level 3 process, the first application of this approach outside North America. Extensive geological investigations identified five fault sources with a non-zero probability of being seismogenic. Five area sources were defined for distributed seismicity, the least active being the host zone for which the low recurrence rates for earthquakes were substantiated through investigations of historical seismicity. Empirical ground-motion prediction equations were adjusted to a horizon within the bedrock at the site using kappa values inferred from weak-motion analyses. These adjusted models were then scaled to create new equations capturing the range of epistemic uncertainty in this region with no strong motion recordings. Surface motions were obtained by convolving the bedrock motions with site amplification functions calculated using measured shear-wave velocity profiles.
Probabilistic seismic-hazard analysis (PSHA) is the current tool of the trade used to estimate the future seismic demands at a site of interest. A modern PSHA represents a complex framework that combines different models with numerous inputs. It is important to understand and assess the impact of these inputs on the model output in a quantitative way. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight about the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs; however, obtaining the derivatives of complex models can be challenging.
In this study, we show how differential sensitivity analysis of a complex framework such as PSHA can be carried out using algorithmic/automatic differentiation (AD). AD has already been successfully applied for sensitivity analyses in various domains such as oceanography and aerodynamics. First, we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities with analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. Second, we derive sensitivities via AD for a more complex PSHA study using a stochastic simulation approach for the prediction of ground motions. The presented approach is general enough to accommodate more advanced PSHA studies of greater complexity.
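The core mechanism of forward-mode AD can be illustrated with dual numbers applied to a toy GMPE of the form ln Y = c1 + c2*M + c3*ln(R + h). This only demonstrates the AD idea; the study itself applies the TAPENADE tool to far more complex models, and the coefficients below are made up.

```python
import math

# Minimal forward-mode algorithmic differentiation via dual numbers.
class Dual:
    """Carries a value and its derivative along a seeded direction."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def dlog(x):
    # natural log with derivative propagation
    return Dual(math.log(x.val), x.dot / x.val)

def ln_gmpe(M, R):
    c1, c2, c3, h = -1.0, 0.9, -1.3, 6.0  # invented coefficients
    return c1 + c2 * M + c3 * dlog(R + h)

# seed dM = 1 to obtain the sensitivity of ln Y to magnitude at M=6, R=30 km
sens_M = ln_gmpe(Dual(6.0, 1.0), Dual(30.0, 0.0)).dot  # equals c2 = 0.9
```

Seeding dR = 1 instead yields the distance sensitivity c3/(R + h), with no finite-difference step size to tune: derivatives are exact to machine precision.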
Empirical ground-motion prediction equations (GMPEs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This article presents a holistic framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain. The approach for developing a response spectral GMPE is unique, because it combines the predictions of empirical models for the two components that characterize the spectral and temporal behavior of the ground motion. Essentially, as described in its initial form by Bora et al. (2014), the approach consists of an empirical model for the Fourier amplitude spectrum (FAS) and a model for the ground-motion duration. These two components are combined within the random vibration theory framework to obtain predictions of response spectral ordinates. In addition, the FAS corresponding to individual acceleration records are extrapolated beyond their usable frequencies using the stochastic FAS model obtained by inversion as described in Edwards and Fäh (2013a). To that end, an oscillator-frequency-dependent duration model, consistent with the empirical FAS model, is also derived. This makes it possible to generate a response spectral model that is easily adjustable to different sets of seismological parameters, such as the stress parameter Δσ, the quality factor Q, and kappa (κ0). The dataset used in Bora et al. (2014), a subset of the RESORCE-2012 database, is considered for the present analysis. Based upon the range of the predictor variables in the selected dataset, the present response spectral GMPE should be considered applicable over the magnitude range 4 ≤ Mw ≤ 7.6 at distances up to 200 km.
A Bayesian ground-motion model is presented that directly estimates the coefficients of the model and the correlation between different ground-motion parameters of interest. The model is developed as a multi-level model with levels for earthquake, station and record terms. This separation allows residuals, and hence the associated aleatory variability, to be estimated at each level. In particular, the usually estimated within-event variability is split into a between-station and a between-record variability. In addition, the covariance structure between different ground-motion parameters of interest is estimated for each level, i.e. the between-event, between-station and between-record correlation coefficients are directly available. All parameters of the model are estimated via Bayesian inference, which allows their epistemic uncertainty to be assessed in a principled way. The model is developed using a recently compiled European strong-motion database. The target variables are peak ground velocity, peak ground acceleration and spectral acceleration at eight oscillator periods. The model performs well with respect to its residuals, and is similar to other ground-motion models based on the same underlying database. The correlation coefficients are similar to those estimated for other parts of the world, with nearby periods having a high correlation. The between-station, between-event and between-record correlations generally follow a similar trend.
Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks.
In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward and requires specific tailoring and adaptation of existing algorithms. We offer suggestions as to how to tackle frequently arising problems in this context, concentrating mainly on the handling of continuous variables, incomplete data sets, and the interaction of the two. By way of three case studies from earthquake, flood, and landslide research, we demonstrate the method of data-driven Bayesian network learning, and showcase the flexibility, applicability, and benefits of this approach.
Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.
The Ceres earthquake of 29 September 1969 is the largest known earthquake in southern Africa. Digitized analog recordings from World-Wide Standardized Seismograph Network stations (Powell and Fries, 1964) are used to retrieve the point source moment tensor and the most likely centroid depth of the event using full waveform modeling. A scalar seismic moment of 2.2–2.4 × 10^18 N·m, corresponding to a moment magnitude of 6.2–6.3, is found. The analysis confirms the pure strike-slip mechanism previously determined from onset polarities by Green and Bloch (1971). Overall, good agreement is found with the fault orientation previously estimated from local aftershock recordings. The centroid depth can be constrained to be less than 15 km. In a second analysis step, we use a higher-order moment tensor based inversion scheme for simple extended rupture models to constrain the lateral fault dimensions. We find that rupture propagated unilaterally for 4.7 s from east-southeast to west-northwest over about 17 km (average rupture velocity of about 3.1 km/s).
This article presents comparisons among the five ground-motion models described in other articles within this special issue, in terms of data selection criteria, characteristics of the models and predicted peak ground and response spectral accelerations. Comparisons are also made with predictions from the Next Generation Attenuation (NGA) models to which the models presented here have similarities (e.g. a common master database has been used) but also differences (e.g. some models in this issue are nonparametric). As a result of the differing data selection criteria and derivation techniques the predicted median ground motions show considerable differences (up to a factor of two for certain scenarios), particularly for magnitudes and distances close to or beyond the range of the available observations. The predicted influence of style-of-faulting shows much variation among models whereas site amplification factors are more similar, with peak amplification at around 1s. These differences are greater than those among predictions from the NGA models. The models for aleatory variability (sigma), however, are similar and suggest that ground-motion variability from this region is slightly higher than that predicted by the NGA models, based primarily on data from California and Taiwan.
We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured by the number of explanatory variables, which ranges from 1 to 10 variables singled out from 28 candidates. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.
Inferring a ground-motion prediction equation (GMPE) for a region in which only a small number of seismic events have been observed is a challenging task. A response to this data scarcity is to utilise data from other regions, in the hope that there exist common patterns in the generation of ground motion that can contribute to the development of a GMPE for the region in question. This is not an unreasonable course of action, since we expect regional GMPEs to be related to each other. In this work we model this relatedness by assuming that the regional GMPEs occupy a common low-dimensional manifold in the space of all possible GMPEs. As a consequence, the GMPEs are fitted jointly rather than independently of each other, borrowing predictive strength from each other's regional datasets. Experimentation on a real dataset shows that the manifold assumption yields better predictive performance than fitting regional GMPEs independently of each other.
Aleatory variability in ground-motion prediction, represented by the standard deviation (sigma) of a ground-motion prediction equation, exerts a very strong influence on the results of probabilistic seismic-hazard analysis (PSHA). This is especially so at the low annual exceedance frequencies considered for nuclear facilities; in these cases, even small reductions in sigma can have a marked effect on the hazard estimates. Proper separation and quantification of aleatory variability and epistemic uncertainty can lead to defensible reductions in sigma. One such approach is the single-station sigma concept, which removes that part of sigma corresponding to repeatable site-specific effects. However, the site-to-site component must then be constrained by site-specific measurements or else modeled as epistemic uncertainty and incorporated into the modeling of site effects. The practical application of the single-station sigma concept, including the characterization of the dynamic properties of the site and the incorporation of site-response effects into the hazard calculations, is illustrated for a PSHA conducted at a rock site under consideration for the potential construction of a nuclear power plant.
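The variance decomposition behind the single-station sigma concept reduces to simple arithmetic: total sigma combines a between-event term tau, a site-to-site term phi_S2S, and a single-station within-event term phi_SS, and single-station sigma drops the repeatable phi_S2S part. The component values below are illustrative, not from the study.

```python
import math

# Arithmetic sketch of the single-station sigma decomposition:
# sigma_total^2 = tau^2 + phi_S2S^2 + phi_SS^2
# sigma_SS^2    = tau^2 + phi_SS^2   (repeatable site term removed)
def single_station_sigma(tau, phi_s2s, phi_ss):
    """Returns (total sigma, single-station sigma) in log units."""
    total = math.sqrt(tau ** 2 + phi_s2s ** 2 + phi_ss ** 2)
    ss = math.sqrt(tau ** 2 + phi_ss ** 2)
    return total, ss
```

Because the terms add in quadrature, even a moderate phi_S2S yields a noticeable reduction in sigma, which is why the removal must be justified by site-specific measurements or re-introduced as epistemic uncertainty.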
Response spectra are of fundamental importance in earthquake engineering and represent a standard measure in seismic design for the assessment of structural performance. However, unlike Fourier spectral amplitudes, the relationship of response spectral amplitudes to seismological source, path, and site characteristics is not immediately obvious and might even be considered counterintuitive for high oscillator frequencies. The understanding of this relationship is nevertheless important for seismic-hazard analysis. The purpose of the present study is to comprehensively characterize the variation of response spectral amplitudes due to perturbations of the causative seismological parameters. This is done by calculating the absolute parameter sensitivities (sensitivity coefficients) defined as the partial derivatives of the model output with respect to its input parameters. To derive sensitivities, we apply algorithmic differentiation (AD). This powerful approach is extensively used for sensitivity analysis of complex models in meteorology or aerodynamics. To the best of our knowledge, AD has not been explored yet in the seismic-hazard context. Within the present study, AD was successfully implemented for a proven and extensively applied simulation program for response spectra (Stochastic Method SIMulation [SMSIM]) using the TAPENADE AD tool. We assess the effects and importance of input parameter perturbations on the shape of response spectra for different regional stochastic models in a quantitative way. Additionally, we perform sensitivity analysis regarding adjustment issues of ground-motion prediction equations.
Bayesian networks are a powerful and increasingly popular tool for reasoning under uncertainty, offering intuitive insight into (probabilistic) data-generating processes. They have been successfully applied to many different fields, including bioinformatics. In this paper, Bayesian networks are used to model the joint probability distribution of selected earthquake, site, and ground-motion parameters. This provides a probabilistic representation of the independencies and dependencies between these variables. In particular, contrary to classical regression, Bayesian networks do not distinguish between target and predictors, treating each variable as a random variable. The capability of Bayesian networks to model the ground-motion domain in probabilistic seismic hazard analysis is shown for a generic situation. A Bayesian network is learned based on a subset of the Next Generation Attenuation (NGA) dataset, using 3342 records from 154 earthquakes. Because no prior assumptions about dependencies between particular parameters are made, the learned network displays the most probable model given the data. The learned network shows that the ground-motion parameter (horizontal peak ground acceleration, PGA) is directly connected only to the moment magnitude, Joyner-Boore distance, fault mechanism, source-to-site azimuth, and depth to a shear-wave velocity horizon of 2.5 km/s (Z2.5). In particular, the effect of VS30 is mediated by Z2.5. Comparisons of the PGA distributions based on the Bayesian networks with the NGA model of Boore and Atkinson (2008) show a reasonable agreement in ranges of good data coverage.
One of the key challenges in the context of local site effect studies is the determination of frequencies at which the shakeability of the ground is enhanced. In this context, the H/V technique has become increasingly popular, and peak frequencies of the H/V spectral ratio are sometimes interpreted as resonance frequencies of the transmission response. In the present study, assuming that the Rayleigh surface wave dominates the H/V spectral ratio, we analyse theoretically under which conditions this interpretation is justified and when it is not. We focus on 'layer over half-space' models which, although seemingly simple, capture many aspects of local site effects in real sedimentary structures. Our starting point is the ellipticity of Rayleigh waves. We use the exact formula of the H/V ratio presented by Malischewsky & Scherbaum (2004) to investigate the main characteristics of peak and trough frequencies. We present a simple formula indicating whether and where H/V-ratio curves have sharp peaks as a function of the model parameters. In addition, we have constructed a map that demonstrates the relation between the H/V peak frequency and the peak frequency of the transmission response in the domain of the layer's Poisson ratio and the impedance contrast. Finally, we have derived maps showing the relationship between the H/V peak and trough frequencies and key parameters of the model, such as the impedance contrast. These maps are seen as diagnostic tools that can help to guide the interpretation of H/V spectral ratio diagrams in the context of site effect studies.
Logic trees have become the most popular tool for the quantification of epistemic uncertainties in probabilistic seismic hazard assessment (PSHA). In a logic-tree framework, epistemic uncertainty is expressed in a set of branch weights, by which an expert or an expert group assigns degree-of-belief values to the applicability of the corresponding branch models. Despite the popularity of logic trees, however, one finds surprisingly few clear commitments to what logic-tree branch weights are assumed to be (even by hazard analysts designing logic trees). In the present paper we argue that it is important for hazard analysts to accept the probabilistic framework from the outset when assigning logic-tree branch weights; in other words, to accept that logic-tree branch weights are probabilities in the axiomatic sense, independent of one's preferred philosophical interpretation of probability. We demonstrate that interpreting logic-tree branch weights merely as numerical measures of "model quality", which are subsequently normalized to sum to unity, will, as the number of models increases, inevitably lead to an apparent insensitivity of hazard curves to the branch weights, which may even be mistaken for robustness of the results. Finally, we argue that assigning logic-tree branch weights in a sequential fashion may improve their logical consistency.
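The dilution effect argued above can be demonstrated numerically: when raw "quality" scores are simply normalized to sum to one, the influence of any single judgment shrinks as branches are added. All scores below are invented for illustration.

```python
# Hedged numeric sketch of the normalization dilution effect.
def normalized_weights(scores):
    total = sum(scores)
    return [s / total for s in scores]

def weight_gain(n_models):
    """Change in one model's normalized weight when its quality score is
    doubled, among n_models otherwise equally rated branches."""
    base = normalized_weights([1.0] * n_models)[0]
    boosted = normalized_weights([2.0] + [1.0] * (n_models - 1))[0]
    return boosted - base
```

Doubling one score among 2 branches shifts its weight by 1/6; among 20 branches the shift is already below 0.05, so the hazard curve barely responds to the judgment, which can be mistaken for robustness.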
Magnitude estimation for microseismicity induced during the KTB 2004/2005 injection experiment
(2011)
We determined the magnitudes of 2540 microseismic events measured at a single 3C borehole geophone at the German Deep Drilling Site (known by its German acronym, KTB) during the 2004/2005 injection phase. For this task we developed a three-step approach. First, we estimated local magnitudes of 104 larger events with a standard method based on amplitude measurements at near-surface stations. Second, we investigated a series of parameters characterizing the size of these events using the seismograms of the borehole sensor, and compared them statistically with the local magnitudes. Third, we extrapolated the regression curve to obtain the magnitudes of 2436 events that were measured only at the borehole geophone. This method improved the magnitude of completeness for the KTB data set by more than one magnitude unit, down to M = -2.75. The resulting b-value for all events was 0.78, similar to the b-value of 0.86 obtained when taking only the larger events with standard local magnitude estimation from near-surface stations. The more complete magnitude catalog was required to study the magnitude distribution over time and to characterize the seismotectonic state of the KTB injection site. The event distribution over time was consistent with theoretical predictions assuming pore-pressure diffusion as the underlying mechanism triggering the events. The value of -4 we obtained for the seismogenic index suggests that the seismic hazard potential at the KTB site is comparatively low.
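The regression-and-extrapolation idea in the three-step approach above, plus a b-value estimate, can be sketched as follows. The b-value formula is the standard Aki (1965) maximum-likelihood estimator with Utsu's half-bin correction; all numbers used in testing are synthetic placeholders, not the KTB catalog.

```python
import math

# Hedged sketch: (1) regress local magnitude on a borehole size
# parameter (here, log peak amplitude), (2) extrapolate the line to
# events recorded only downhole, (3) estimate a Gutenberg-Richter
# b-value with the Aki maximum-likelihood formula.
def fit_line(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
             sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def aki_b_value(mags, m_c, bin_width=0.1):
    """b = log10(e) / (mean(M) - Mc + half bin width), for M >= Mc."""
    events = [m for m in mags if m >= m_c]
    mean_m = sum(events) / len(events)
    return math.log10(math.e) / (mean_m - m_c + bin_width / 2.0)
```

Applying the fitted slope and intercept to the borehole-only amplitudes yields magnitudes for the small events, after which the b-value can be computed on the enlarged catalog.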
Enhancing the resolution and accuracy of surface ground-penetrating radar (GPR) reflection data by inverse filtering to recover a zero-phased band-limited reflectivity image requires a deconvolution technique that takes the mixed-phase character of the embedded wavelet into account. In contrast, standard stochastic deconvolution techniques assume that the wavelet is minimum phase and, hence, often meet with limited success when applied to GPR data. We present a new general-purpose blind deconvolution algorithm for mixed-phase wavelet estimation and deconvolution that (1) uses the parametrization of a mixed-phase wavelet as the convolution of the wavelet's minimum-phase equivalent with a dispersive all-pass filter, (2) includes prior information about the wavelet to be estimated in a Bayesian framework, and (3) relies on the assumption of a sparse reflectivity. Solving the normal equations using the data autocorrelation function provides an inverse filter that optimally removes the minimum-phase equivalent of the wavelet from the data, which leaves traces with a balanced amplitude spectrum but distorted phase. To compensate for the remaining phase errors, we invert in the frequency domain for an all-pass filter thereby taking advantage of the fact that the action of the all-pass filter is exclusively contained in its phase spectrum. A key element of our algorithm and a novelty in blind deconvolution is the inclusion of prior information that allows resolving ambiguities in polarity and timing that cannot be resolved using the sparseness measure alone. We employ a global inversion approach for non-linear optimization to find the all-pass filter phase values for each signal frequency. We tested the robustness and reliability of our algorithm on synthetic data with different wavelets, 1-D reflectivity models of different complexity, varying levels of added noise, and different types of prior information. 
When applied to realistic synthetic 2-D data and 2-D field data, we obtain images with increased temporal resolution compared to the results of standard processing.
Tsunami early warning (TEW) is a challenging task, as a decision has to be made within a few minutes on the basis of incomplete and error-prone data. Deterministic warning systems have difficulties integrating and quantifying the intrinsic uncertainties. In contrast, probabilistic approaches provide a framework that handles uncertainties in a natural way. Recently, we proposed a method using Bayesian networks (BNs) that takes into account the uncertainties of seismic source parameter estimates in TEW. In this follow-up study, the method is applied to 10 recent large earthquakes offshore Sumatra and tested for its performance. We evaluated both the general model performance, given the best knowledge we have today about the source parameters of the 10 events, and the corresponding response to seismic source information evaluated in real time. We find that the resulting site-specific warning level probabilities represent the available tsunami wave measurements and observations well. Difficulties occur in the real-time tsunami assessment if the moment magnitude estimate is severely over- or underestimated. In general, the probabilistic analysis reveals a considerable range of uncertainties in near-field TEW. By quantifying these uncertainties, the BN analysis provides important additional information to a decision maker in a warning centre, helping to deal with the complexity of TEW and to reason under uncertainty.