The PEGASOS project was a major international seismic hazard study, one of the largest ever conducted anywhere in the world, to assess seismic hazard at four nuclear power plant sites in Switzerland. Before the report of this project became publicly available, a paper attacking both its methodology and results appeared. Since the general scientific readership may have difficulty assessing this attack in the absence of the report being attacked, we supply a response in the present paper. The bulk of the attack, besides some misconceived arguments about the role of uncertainties in seismic hazard analysis, is carried by some exercises that purport to be validation exercises. In practice, they are no such thing; they are merely independent sets of hazard calculations based on varying, often rather questionable, assumptions and procedures, which come up with various answers of no particular significance. (C) 2005 Elsevier B.V. All rights reserved.
Seismic-hazard assessment is of great importance within the field of engineering seismology. Nowadays, it is common practice to define future seismic demands using probabilistic seismic-hazard analysis (PSHA). Often it is neither obvious nor transparent how PSHA responds to changes in its inputs. In addition, PSHA relies on many uncertain inputs. Sensitivity analysis (SA) is concerned with the assessment and quantification of how changes in the model inputs affect the model response and how input uncertainties influence the distribution of the model response. Sensitivity studies are challenging primarily for computational reasons; hence, the development of efficient methods is of major importance. Powerful local (deterministic) methods widely used in other fields can make SA feasible, even for complex models with a large number of inputs; for example, automatic/algorithmic differentiation (AD)-based adjoint methods. Recently developed derivative-based global sensitivity measures can combine the advantages of such local SA methods with efficient sampling strategies facilitating quantitative global sensitivity analysis (GSA) for complex models. In our study, we propose and implement exactly this combination. It allows an upper bounding of the sensitivities involved in PSHA globally and, therefore, an identification of the noninfluential and the most important uncertain inputs. To the best of our knowledge, it is the first time that derivative-based GSA measures are combined with AD in practice. In addition, we show that first-order uncertainty propagation using the delta method can give satisfactory approximations of global sensitivity measures and allow a rough characterization of the model output distribution in the case of PSHA. An illustrative example is shown for the suggested derivative-based GSA of a PSHA that uses stochastic ground-motion simulations.
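As a rough illustration of the derivative-based GSA described above, the following sketch estimates the DGSM nu_i = E[(df/dx_i)^2] by Monte Carlo and turns it into an upper bound on the total Sobol' indices. The two-input toy model is hypothetical (standing in for the PSHA computation), and i.i.d. uniform inputs are assumed, for which the Poincaré constant 1/pi^2 applies:

```python
import numpy as np

def f(X):
    # Hypothetical toy model standing in for the hazard computation.
    return np.exp(1.5 * X[:, 0]) / (1.0 + 10.0 * X[:, 1])

def grad_f(X):
    # Analytic partial derivatives; for a real PSHA code an AD tool would supply these.
    base = f(X)
    g0 = 1.5 * base
    g1 = -10.0 * base / (1.0 + 10.0 * X[:, 1])
    return np.stack([g0, g1], axis=1)

rng = np.random.default_rng(42)
X = rng.uniform(size=(200_000, 2))
nu = (grad_f(X) ** 2).mean(axis=0)       # DGSM: nu_i = E[(df/dx_i)^2]
D = f(X).var()                           # output variance
S_tot_upper = nu / (np.pi ** 2 * D)      # bound on total Sobol' indices (U(0,1) inputs)
```

Inputs with a small bound can be flagged as noninfluential and fixed, which is the screening use of DGSM the abstract refers to.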
Probabilistic seismic-hazard analysis (PSHA) is the current tool of the trade used to estimate the future seismic demands at a site of interest. A modern PSHA represents a complex framework that combines different models with numerous inputs. It is important to understand and assess the impact of these inputs on the model output in a quantitative way. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight about the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs; however, obtaining the derivatives of complex models can be challenging.
In this study, we show how differential sensitivity analysis of a complex framework such as PSHA can be carried out using algorithmic/automatic differentiation (AD). AD has already been successfully applied for sensitivity analyses in various domains such as oceanography and aerodynamics. First, we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities with analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. Second, we derive sensitivities via AD for a more complex PSHA study using a stochastic simulation approach for the prediction of ground motions. The presented approach is general enough to accommodate more advanced PSHA studies of greater complexity.
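The principle that an AD tool such as TAPENADE automates can be illustrated with a minimal forward-mode (dual-number) sketch. The GMPE form and its coefficients below are made up purely for illustration:

```python
import math

class Dual:
    """Minimal forward-mode AD number: a value plus its derivative (tangent)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def dlog(x):
    # log with the chain rule applied to the tangent
    return Dual(math.log(x.val), x.dot / x.val)

def ln_pga(m, r):
    # Hypothetical toy GMPE (invented coefficients): ln PGA = a + b*M + c*ln(R)
    a, b, c = -3.5, 0.9, -1.2
    return a + b * m + c * dlog(r)

# Seed the tangent of whichever input we differentiate with respect to:
d_dm = ln_pga(Dual(6.0, 1.0), Dual(30.0, 0.0)).dot   # exact: b
d_dr = ln_pga(Dual(6.0, 0.0), Dual(30.0, 1.0)).dot   # exact: c / R
```

Unlike finite differences, the tangents are exact to machine precision, which is the key advantage AD brings to differential sensitivity analysis.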
Response spectra are of fundamental importance in earthquake engineering and represent a standard measure in seismic design for the assessment of structural performance. However, unlike Fourier spectral amplitudes, the relationship of response spectral amplitudes to seismological source, path, and site characteristics is not immediately obvious and might even be considered counterintuitive for high oscillator frequencies. The understanding of this relationship is nevertheless important for seismic-hazard analysis. The purpose of the present study is to comprehensively characterize the variation of response spectral amplitudes due to perturbations of the causative seismological parameters. This is done by calculating the absolute parameter sensitivities (sensitivity coefficients) defined as the partial derivatives of the model output with respect to its input parameters. To derive sensitivities, we apply algorithmic differentiation (AD). This powerful approach is extensively used for sensitivity analysis of complex models in meteorology or aerodynamics. To the best of our knowledge, AD has not been explored yet in the seismic-hazard context. Within the present study, AD was successfully implemented for a proven and extensively applied simulation program for response spectra (Stochastic Method SIMulation [SMSIM]) using the TAPENADE AD tool. We assess the effects and importance of input parameter perturbations on the shape of response spectra for different regional stochastic models in a quantitative way. Additionally, we perform sensitivity analysis regarding adjustment issues of ground-motion prediction equations.
The ellipticity of Rayleigh surface waves, which is an important parameter characterizing the propagation medium, is studied for several models with increasing complexity. While the main focus lies on theory, practical implications of the use of the horizontal to vertical component ratio (H/V-ratio) to study the subsurface structure are considered as well. Love's approximation of the ellipticity for an incompressible layer over an incompressible half-space is critically discussed, especially concerning its applicability for different impedance contrasts. The main result is an analytically exact formula of H/V for a 2-layer model of compressible media, which is a generalization of Love's formula. It turns out that for a limited range of models Love's approximation can be used also in the general case. (C) 2003 Elsevier B.V. All rights reserved.
Empirical ground-motion models used in seismic hazard analysis are commonly derived by regression of observed ground motions against a chosen set of predictor variables. Commonly, the model building process is based on residual analysis and/or expert knowledge and opinion, while the quality of the model is assessed by the goodness-of-fit to the data. Such an approach, however, bears no immediate relation to the predictive power of the model and, with increasing complexity of the models, is increasingly susceptible to the danger of overfitting. Here, a different, primarily data-driven method for the development of ground-motion models is proposed that makes use of the notion of generalization error to counteract the problem of overfitting. Generalization error directly estimates the average prediction error on data not used for the model generation and, thus, is a good criterion to assess the predictive capabilities of a model. The approach taken here makes only a few a priori assumptions. At first, peak ground acceleration and response spectrum values are modeled by flexible, nonphysical functions (polynomials) of the predictor variables. The inclusion of a particular predictor and the order of the polynomials are based on minimizing generalization error. The approach is illustrated for the Next Generation Attenuation (NGA) dataset. The resulting model is rather complex, comprising 48 parameters, but has considerably lower generalization error than functional forms commonly used in ground-motion models. The model parameters have no physical meaning, but a visual interpretation is possible and can reveal relevant characteristics of the data, for example, the Moho bounce in the distance scaling. In a second step, the regression model is approximated by an equivalent stochastic model, making it physically interpretable. The resulting resolvable stochastic model parameters are comparable to published models for western North America.
In general, for large datasets generalization error minimization provides a viable method for the development of empirical ground-motion models.
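The idea of choosing model complexity by minimizing an estimate of generalization error can be sketched with 5-fold cross-validation; the synthetic stand-in data (a log-distance decay with noise) and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in data: log-amplitudes decaying with distance, plus noise.
x = rng.uniform(1.0, 100.0, size=300)
y = 2.0 - 1.1 * np.log(x) + rng.normal(0.0, 0.3, size=300)

perm = rng.permutation(len(x))           # fixed fold assignment
folds = np.array_split(perm, 5)

def cv_error(order):
    """5-fold cross-validation estimate of generalization error
    for a polynomial of given order in ln(x)."""
    errs = []
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        coef = np.polyfit(np.log(x[train]), y[train], order)
        pred = np.polyval(coef, np.log(x[test]))
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))

# Pick the polynomial order with the lowest estimated generalization error:
best_order = min(range(1, 9), key=cv_error)
```

Goodness-of-fit on the training data alone would always favor the highest order; the held-out error penalizes overfitting, which is the point of the approach.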
Bayesian networks are a powerful and increasingly popular tool for reasoning under uncertainty, offering intuitive insight into (probabilistic) data-generating processes. They have been successfully applied to many different fields, including bioinformatics. In this paper, Bayesian networks are used to model the joint-probability distribution of selected earthquake, site, and ground-motion parameters. This provides a probabilistic representation of the independencies and dependencies between these variables. In particular, contrary to classical regression, Bayesian networks do not distinguish between target and predictors, treating each variable as a random variable. The capability of Bayesian networks to model the ground-motion domain in probabilistic seismic hazard analysis is shown for a generic situation. A Bayesian network is learned based on a subset of the Next Generation Attenuation (NGA) dataset, using 3342 records from 154 earthquakes. Because no prior assumptions about dependencies between particular parameters are made, the learned network displays the most probable model given the data. The learned network shows that the ground-motion parameter (horizontal peak ground acceleration, PGA) is directly connected only to the moment magnitude, Joyner-Boore distance, fault mechanism, source-to-site azimuth, and depth to a shear-wave horizon of 2.5 km/s (Z2.5). In particular, the effect of V-S30 is mediated by Z2.5. Comparisons of the PGA distributions based on the Bayesian networks with the NGA model of Boore and Atkinson (2008) show a reasonable agreement in ranges of good data coverage.
A Bayesian ground-motion model is presented that directly estimates the coefficients of the model and the correlation between different ground-motion parameters of interest. The model is developed as a multi-level model with levels for earthquake, station and record terms. This separation allows the estimation of residuals for each level and thus of the associated aleatory variability. In particular, the usually estimated within-event variability is split into a between-station and a between-record variability. In addition, the covariance structure between different ground-motion parameters of interest is estimated for each level, i.e. the between-event, between-station and between-record correlation coefficients are directly available. All parameters of the model are estimated via Bayesian inference, which allows their epistemic uncertainty to be assessed in a principled way. The model is developed using a recently compiled European strong-motion database. The target variables are peak ground velocity, peak ground acceleration and spectral acceleration at eight oscillator periods. The model performs well with respect to its residuals, and is similar to other ground-motion models using the same underlying database. The correlation coefficients are similar to those estimated for other parts of the world, with nearby periods having a high correlation. The between-station, between-event and between-record correlations generally follow a similar trend.
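The splitting of residual variability into event, station and record terms can be illustrated with a crude method-of-moments simulation. All standard deviations below are made-up illustrative values, and the per-event averaging is only a rough stand-in for the mixed-effects/Bayesian estimation the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical "true" standard deviations (illustrative only):
tau, phi_s2s, phi_0 = 0.35, 0.25, 0.45   # between-event, between-station, between-record

n_eq, n_sta, n_rec = 60, 40, 6000
dB = rng.normal(0.0, tau, n_eq)          # event terms
dS = rng.normal(0.0, phi_s2s, n_sta)     # station terms
eq = rng.integers(0, n_eq, n_rec)        # event index of each record
sta = rng.integers(0, n_sta, n_rec)      # station index of each record
resid = dB[eq] + dS[sta] + rng.normal(0.0, phi_0, n_rec)

# Crude recovery of the between-event sd from per-event mean residuals:
event_means = np.array([resid[eq == i].mean() for i in range(n_eq)])
tau_hat = event_means.std(ddof=1)
```

With many records per event, the station and record contributions average out of the per-event means, so tau_hat approaches the between-event standard deviation.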
The combined passive and active seismic TRANSALP experiment produced an unprecedented high-resolution crustal image of the Eastern Alps between Munich and Venice. The European and Adriatic Mohos (EM and AM, respectively) are clearly imaged with different seismic techniques: near-vertical incidence reflections and receiver functions (RFs). The European Moho dips gently southward from 35 km beneath the northern foreland to a maximum depth of 55 km beneath the central part of the Eastern Alps, whereas the Adriatic Moho is imaged primarily by receiver functions at a relatively constant depth of about 40 km. In both data sets, we have also detected first-order Alpine shear zones, such as the Helvetic detachment, Inntal fault and SubTauern ramp in the north. Apart from the Valsugana thrust, receiver functions in the southern part of the Eastern Alps also reveal a north-dipping interface, which may penetrate the entire Adriatic crust [Adriatic Crust Interface (ACI)]. Deep crustal seismicity may be related to the ACI. We interpret the ACI as the currently active retroshear zone in the doubly vergent Alpine collisional belt. (C) 2004 Elsevier B.V. All rights reserved.
In estimating dispersion by means of wavelet analysis, considerable emphasis has been put on extracting the group velocity from the modulus of the wavelet transform. In this paper we give an asymptotic expression of the full propagator in wavelet space that comprises the phase velocity as well. This operator establishes a relationship between the observed signals at two different stations during wave propagation in a dispersive and attenuating medium. Numerical and experimental examples are presented to show that the method accurately models seismic wave dispersion and attenuation.
A partially non-ergodic ground-motion prediction equation is estimated for Europe and the Middle East. To this end, a hierarchical model is presented that accounts for regional differences. For this purpose, the scaling of ground-motion intensity measures is assumed to be similar, but not identical, in different regions. This is achieved by treating some coefficients as random variables which are sampled from an underlying global distribution. The coefficients are estimated by Bayesian inference. This allows one to estimate the epistemic uncertainty in the coefficients, and consequently in model predictions, in a rigorous way. The model is estimated based on peak ground acceleration data from nine different European/Middle Eastern regions. There are large differences in the number of earthquakes and records in the different regions. However, due to the hierarchical nature of the model, regions with only a few data points borrow strength from other regions with more data. This makes it possible to estimate a separate set of coefficients for all regions. Different regionalized models are compared, for which different coefficients are assumed to be regionally dependent. Results show that regionalizing the coefficients for magnitude and distance scaling leads to better performance of the models. The models for all regions are physically sound, even for regions with only very few earthquakes.
The Ceres earthquake of 29 September 1969 is the largest known earthquake in southern Africa. Digitized analog recordings from Worldwide Standardized Seismographic Network stations (Powell and Fries, 1964) are used to retrieve the point source moment tensor and the most likely centroid depth of the event using full waveform modeling. A scalar seismic moment of 2.2–2.4 × 10^18 N·m corresponding to a moment magnitude of 6.2–6.3 is found. The analysis confirms the pure strike-slip mechanism previously determined from onset polarities by Green and Bloch (1971). Overall good agreement with the fault orientation previously estimated from local aftershock recordings is found. The centroid depth can be constrained to be less than 15 km. In a second analysis step, we use a higher order moment tensor based inversion scheme for simple extended rupture models to constrain the lateral fault dimensions. We find rupture propagated unilaterally for 4.7 s from east-southwest to west-northwest for about 17 km (average rupture velocity of about 3.1 km/s).
This study presents results of ambient noise measurements from temporary single station and small-scale array deployments in the northeast of Basle. H/V spectral ratios were determined along various profiles crossing the eastern masterfault of the Rhine Rift Valley and the adjacent sedimentary rift fills. The fundamental H/V peak frequencies decrease along the profile towards the eastern direction, consistent with the dip of the tertiary sediments within the rift. Using existing empirical relationships between H/V frequency peaks and the depth of the dominant seismic contrast, derived on the basis of the λ/4-resonance hypothesis and a power law depth dependence of the S-wave velocity, we obtain thicknesses of the rift fill from about 155 m in the west to about 280 m in the east. This is in agreement with previous studies. The array analysis of the ambient noise wavefield yielded a stable dispersion relation consistent with Rayleigh wave propagation velocities. We conclude that a significant amount of surface waves is contained in the observed wavefield. The computed ellipticity for fundamental mode Rayleigh waves for the velocity depth models used for the estimation of the sediment thicknesses is in agreement with the observed H/V spectra over a large frequency band.
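The λ/4-resonance depth estimate described above can be sketched as follows. The power-law parameters vs0 and a below are illustrative placeholders, not the values derived in the cited studies:

```python
def thickness_from_f0(f0, vs0=270.0, a=0.30):
    """
    Sediment thickness (m) from the fundamental H/V peak frequency f0 (Hz),
    assuming quarter-wavelength resonance and a power-law S-wave velocity
    profile Vs(z) = vs0 * z**a (vs0, a are hypothetical placeholder values).
    Vertical S travel time to depth h: t(h) = h**(1-a) / (vs0 * (1-a));
    resonance condition: f0 = 1 / (4 * t(h)), solved for h.
    """
    return (vs0 * (1.0 - a) / (4.0 * f0)) ** (1.0 / (1.0 - a))

h = thickness_from_f0(1.0)   # depth of the velocity contrast for a 1 Hz peak
```

Because velocity increases with depth, the relation between peak frequency and thickness is strongly nonlinear, which is why calibrated power-law profiles matter for the depth mapping.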
This study presents an unsupervised feature selection and learning approach for the discovery and intuitive imaging of significant temporal patterns in seismic single-station or network recordings. For this purpose, the data are parametrized by real-valued feature vectors for short time windows using standard analysis tools for seismic data, such as frequency-wavenumber, polarization, and spectral analysis. We use Self-Organizing Maps (SOMs) for a data-driven feature selection, visualization and clustering procedure, which is particularly suitable for high-dimensional data sets. Our feature selection method is based on significance testing using the Wald-Wolfowitz runs test for individual features and on correlation hunting with SOMs in feature subsets. Using synthetics composed of Rayleigh and Love waves and real-world data, we show the robustness and the improved discriminative power of that approach compared to feature subsets manually selected from individual wavefield parametrization methods. Furthermore, the capability of the clustering and visualization techniques to investigate the discrimination of wave phases is shown by means of synthetic waveforms and regional earthquake recordings.
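A minimal implementation of the Wald-Wolfowitz runs test used in the significance-testing step might look like the sketch below, which dichotomizes one feature sequence about its median (a real pipeline would apply this per feature and assumes both categories occur):

```python
import math

def runs_test_z(x, threshold=None):
    """
    Wald-Wolfowitz runs test z-statistic for randomness of a sequence,
    dichotomized about the median by default. Large |z| indicates the
    sequence is too regular (z > 0) or too clustered (z < 0) to be random.
    """
    if threshold is None:
        s = sorted(x)
        threshold = 0.5 * (s[(len(s) - 1) // 2] + s[len(s) // 2])
    signs = [v > threshold for v in x]
    n1 = sum(signs)                 # count above threshold
    n2 = len(signs) - n1            # count at or below threshold
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)) \
        / ((n1 + n2) ** 2 * (n1 + n2 - 1.0))
    return (runs - mu) / math.sqrt(var)

z_alternating = runs_test_z([0, 1] * 20)        # many runs: strongly non-random
z_blocked = runs_test_z([0] * 20 + [1] * 20)    # few runs: strongly non-random
```

Features whose time sequence shows significant structure under this test carry temporal information and are retained; featureless (random-looking) ones can be dropped.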
In probabilistic seismic hazard analysis, different ground-motion prediction equations (GMPEs) are commonly combined within a logic tree framework. The selection of appropriate GMPEs, however, is a non-trivial task, especially for regions where strong-motion data are sparse and where no indigenous GMPE exists, because the set of models needs to capture the whole range of ground-motion uncertainty. In this study we investigate the aggregation of GMPEs into a mixture model with the aim of inferring a backbone model that is able to represent the center of the ground-motion distribution in a logic tree analysis. This central model can be scaled up and down to obtain the full range of ground-motion uncertainty. The combination of models into a mixture is inferred from observed ground-motion data. We tested the new approach for Northern Chile, a region for which no indigenous GMPE exists. Mixture models were calculated for interface and intraslab type events individually. For each source type we aggregated eight subduction zone GMPEs using mainly new strong-motion data that were recorded within the Plate Boundary Observatory Chile project and that were processed within this study. We show that the mixture performs better than any of its component GMPEs, and comparably to a regression model that was derived for the same dataset. The mixture model seems to represent the median ground motions in that region fairly well. It is thus able to serve as a backbone model for the logic tree.
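Inferring mixture weights for a set of fixed component densities can be sketched with plain EM updates. The toy demo below uses two fixed unit-variance Gaussians in place of the paper's GMPE components; all numbers are invented for illustration:

```python
import numpy as np

def fit_mixture_weights(logpdf, n_iter=200):
    """
    EM updates for the weights of a mixture whose component densities are fixed.
    logpdf[n, j] holds the log-density of record n under component j
    (in the paper's setting, component j would be a candidate GMPE).
    """
    n, k = logpdf.shape
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        resp = np.exp(logpdf) * w                  # unnormalized responsibilities
        resp /= resp.sum(axis=1, keepdims=True)    # E-step: normalize per record
        w = resp.mean(axis=0)                      # M-step: update weights
    return w

# Toy demo: data drawn 70/30 from two fixed Gaussian "models" N(0,1) and N(3,1).
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 1.0, 300)])
means = np.array([0.0, 3.0])
logpdf = -0.5 * (data[:, None] - means[None, :]) ** 2 - 0.5 * np.log(2.0 * np.pi)
weights = fit_mixture_weights(logpdf)
```

Because only the weights are free, the procedure never fits the data better than its best component could on its own record-by-record, yet the weighted mixture density can outperform every single component overall.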
Slow Fourier Transform (2013)
In recent years, H/V measurements have been increasingly used to map the thickness of sediment fill in sedimentary basins in the context of seismic hazard assessment. This parameter is believed to be an important proxy for the site effects in sedimentary basins (e.g. in the Los Angeles basin). Here we present the results of a test using this approach across an active normal fault in a structurally well known situation. Measurements on a 50 km long profile with 1 km station spacing clearly show a change in the frequency of the fundamental peak of H/V ratios with increasing thickness of the sediment layer in the eastern part of the Lower Rhine Embayment. Subsequently, a section of 10 km length across the Erft-Sprung system, a normal fault with ca. 750 m vertical offset, was measured with a station distance of 100 m. Frequencies of the first and second peaks and the first trough in the H/V spectra are used in a simple resonance model to estimate depths of the bedrock. While the frequency of the first peak shows a large scatter for sediment depths larger than ca. 500 m, the frequency of the first trough follows the changing thickness of the sediments across the fault. The lateral resolution is in the range of the station distance of 100 m. A power law for the depth dependence of the S-wave velocity derived from downhole measurements in an earlier study [Budny, 1984] and power laws inverted from dispersion analysis of micro-array measurements [Scherbaum et al., 2002] agree with the results from the H/V ratios of this study.