Records from ocean bottom seismometers (OBSs) are heavily contaminated by noise, which is much stronger than that recorded at most land stations, especially on the horizontal components. As a consequence, the high energy of oceanic noise at frequencies below 1 Hz considerably complicates the analysis of teleseismic earthquake signals recorded by OBSs.
Previous studies proposed various approaches to remove low-frequency noise from OBS recordings but focused mainly on the vertical component. The horizontal-component records, which are crucial for many methods of passive seismological analysis of body and surface waves, could not be improved much in the teleseismic frequency band. Here we introduce a noise reduction method, derived from the harmonic–percussive separation algorithms used in Zali et al. (2021), that separates long-lasting narrowband signals from broadband transients in the OBS signal. This leads to significant noise reduction of OBS records on both the vertical and horizontal components and increases the earthquake signal-to-noise ratio (SNR) without distorting the broadband earthquake waveforms, as demonstrated through tests with synthetic data. Both SNR and cross-correlation coefficients show significant improvements for different realistic noise realizations. The application of the denoised signals in surface wave analysis and receiver functions is discussed through tests with synthetic and real data.
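As a rough illustration of the idea behind harmonic–percussive separation (splitting long-lasting narrowband energy from broadband transients), here is a minimal median-filtering sketch in the Fitzgerald style. The toy spectrogram, window length, and Wiener-like masking are illustrative assumptions, not the algorithm of Zali et al. (2021):

```python
from statistics import median

def median_filter_time(S, win=5):
    # median along the time axis: enhances long-lasting narrowband energy
    nf, nt, r = len(S), len(S[0]), win // 2
    return [[median(S[f][max(0, t - r):t + r + 1]) for t in range(nt)]
            for f in range(nf)]

def median_filter_freq(S, win=5):
    # median along the frequency axis: enhances broadband transients
    nf, nt, r = len(S), len(S[0]), win // 2
    cols = [[S[f][t] for f in range(nf)] for t in range(nt)]
    return [[median(cols[t][max(0, f - r):f + r + 1]) for t in range(nt)]
            for f in range(nf)]

def separate(S):
    H = median_filter_time(S)   # "harmonic" (tonal noise) estimate
    P = median_filter_freq(S)   # "percussive" (transient) estimate
    eps = 1e-12
    # soft Wiener-like mask; the transient part approximates the earthquake
    T = [[S[f][t] * P[f][t] / (H[f][t] + P[f][t] + eps)
          for t in range(len(S[0]))] for f in range(len(S))]
    return H, P, T

# toy magnitude spectrogram: a persistent narrowband noise line at one
# frequency bin plus one broadband "earthquake" transient at frame 10
nf, nt = 16, 32
S = [[0.1] * nt for _ in range(nf)]
for t in range(nt):
    S[4][t] += 2.0
for f in range(nf):
    S[f][10] += 2.0
H, P, T = separate(S)
```

In the masked output, the transient frame keeps its energy while the persistent noise line is strongly suppressed.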
Volcanic tremor extraction and earthquake detection using music information retrieval algorithms
(2021)
Volcanic tremor signals are usually observed before or during volcanic eruptions and must be monitored to evaluate volcanic activity. A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can therefore contribute to improving our understanding of the underlying physical processes. Exploiting the idea of harmonic–percussive separation in musical signal processing, we develop a method to extract the harmonic volcanic tremor signals and to detect transient events from seismic recordings. Based on the similarity properties of spectrogram frames in the time-frequency domain, we decompose the signal into two separate spectrograms representing repeating (harmonic) and nonrepeating (transient) patterns, which correspond to volcanic tremor signals and earthquake signals, respectively. We reconstruct the harmonic tremor signal in the time domain from the complex spectrogram of the repeating pattern by considering only the phase components for the frequency range in which the tremor amplitude spectrum contributes significantly to the energy of the signal. The reconstructed signal is therefore a clean tremor signal without transient events. Furthermore, we derive a characteristic function suitable for the detection of transient events (e.g., earthquakes) by integrating the amplitudes of the nonrepeating spectrogram over frequency at each time frame. Considering transient events like earthquakes, 78% of the events are detected for a signal-to-noise ratio of 0.1 in our semisynthetic tests. In addition, we compared the number of earthquakes detected by our method in one month of continuous data recorded during the 2014-2015 Holuhraun eruption in Iceland with the bulletin presented in Ágústsdóttir et al. (2019). Our single-station event detection algorithm identified 84% of the bulletin events. Moreover, we detected a total of 12,619 events, which is more than twice the number of bulletin events.
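The characteristic function described above (integrating the nonrepeating spectrogram over frequency at each time frame) can be sketched in a few lines. The toy transient spectrogram and the mean-plus-k-standard-deviations trigger are illustrative assumptions, not the authors' detector:

```python
def characteristic_function(T):
    # integrate the nonrepeating (transient) spectrogram over frequency
    return [sum(T[f][t] for f in range(len(T))) for t in range(len(T[0]))]

def detect(cf, k=3.0):
    # flag time frames whose value exceeds mean + k standard deviations
    n = len(cf)
    mu = sum(cf) / n
    std = (sum((x - mu) ** 2 for x in cf) / n) ** 0.5
    thr = mu + k * std
    return [t for t, x in enumerate(cf) if x > thr]

# toy nonrepeating spectrogram: low background plus one broadband burst
nf, nt = 16, 64
T = [[0.05] * nt for _ in range(nf)]
for f in range(nf):
    T[f][10] += 1.0
cf = characteristic_function(T)
picks = detect(cf)
```

The burst at frame 10 stands out clearly in the frequency-integrated trace and is the only frame picked.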
In this article, we address the question of how observed ground-motion data can most effectively be modeled for engineering seismological purposes. Toward this goal, we use a data-driven method, based on a deep-learning autoencoder with a variable number of nodes in the bottleneck layer, to determine how many parameters are needed to reconstruct synthetic and observed ground-motion data in terms of their median values and scatter. The reconstruction error as a function of the number of nodes in the bottleneck is used as an indicator of the underlying dimensionality of ground-motion data, that is, the minimum number of predictor variables needed in a ground-motion model. Two synthetic datasets and one observed dataset are studied to demonstrate the performance of the proposed method. We find that mapping ground-motion data to a 2D manifold primarily captures magnitude and distance information and is suited for an approximate data reconstruction. The data reconstruction improves as the number of bottleneck nodes increases up to three or four, but it saturates if more nodes are added to the bottleneck.
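A linear stand-in can illustrate the bottleneck idea: rank-k PCA plays the role of a k-node bottleneck, and the reconstruction error saturates once k reaches the underlying dimensionality. The synthetic data on a 2-D manifold below are an assumption for illustration, not the datasets of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data on a 2-D manifold embedded in 6 dimensions: two latent
# factors (think magnitude and distance) mapped linearly, plus noise
n, d = 500, 6
Z = rng.normal(size=(n, 2))
A = np.array([[1.0, 0.5, 0.0, 0.3, 0.8, 0.2],
              [0.0, 1.0, 0.7, 0.4, 0.1, 0.6]])
X = Z @ A + 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

def reconstruction_error(X, k):
    # rank-k PCA reconstruction: a linear stand-in for a k-node bottleneck
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]
    return float(np.sqrt(np.mean((X - Xk) ** 2)))

errors = [reconstruction_error(X, k) for k in range(1, d + 1)]
```

The error drops sharply from one to two "bottleneck nodes" and then flattens near the noise floor, mirroring the saturation behavior described in the abstract.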
In this study we examine the tonal organization of a series of recordings of liturgical chants, sung in 1966 by the Georgian master singer Artem Erkomaishvili. This dataset is the oldest corpus of Georgian chants from which the time-synchronous F0 trajectories for all three voices have been reliably determined (Müller et al. 2017). It is therefore of outstanding importance for the understanding of the tuning principles of traditional Georgian vocal music.
The aim of the present study is to use various computational methods to analyze what these recordings can contribute to the ongoing scientific dispute about traditional Georgian tuning systems. The starting point for the present analysis is the re-release of the original audio data together with estimated fundamental frequency (F0) trajectories for each of the three voices, beat annotations, and digital scores (Rosenzweig et al. 2020). We present synoptic models for the pitch and harmonic interval distributions, the first such models for which the complete Erkomaishvili dataset was used. We show that these distributions can be expressed very compactly as Gaussian mixture models, anchored on discrete sets of pitch or interval values, respectively. As part of our study we demonstrate that these pitch values, which we refer to as scale pitches and which are determined as the mean values of the Gaussian mixture elements, define the scale degrees of the melodic sound scales that form the skeleton of Artem Erkomaishvili's intonation. The observation of consistent pitch bending of notes in melodic phrases, which appear in identical form in a group of chants, as well as the observation of harmonically driven intonation adjustments, which are clearly documented for all pure harmonic intervals, demonstrates that Artem Erkomaishvili intentionally deviates from the scale pitch skeleton quite freely. As a central result of our study, we prove that this melodic freedom is always constrained by the attracting influence of the scale pitches: deviations of the F0 values of individual note events from the scale pitches at one instant are compensated for in the subsequent melodic steps. This suggests a deviation-compensation mechanism at the core of Artem Erkomaishvili's melody generation, which clearly honors the scales but still allows a large degree of melodic flexibility.
This model, which summarizes all partial aspects of our analysis, is consistent with the melodic scale models derived from the observed pitch distributions, as well as with the melodic and harmonic interval distributions. Beyond these tangible results, we believe that our work has general implications for the determination of tuning models from audio data, in particular for non-tempered music.
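The core modeling step, fitting a Gaussian mixture to pitch values so that the component means act as "scale pitches", can be sketched with a small 1-D EM iteration. The synthetic cent values, equal fixed weights, and fixed component width are simplifying assumptions, not the models fitted to the Erkomaishvili dataset:

```python
import math
import random

random.seed(1)
# synthetic pitch values (in cents) scattered around two "scale pitches"
data = [random.gauss(0.0, 20.0) for _ in range(300)] + \
       [random.gauss(200.0, 20.0) for _ in range(300)]

def em_means(x, mus, sigma=30.0, iters=50):
    # EM for a 1-D two-component mixture with equal, fixed weights;
    # only the means (the "scale pitches") are re-estimated for simplicity
    for _ in range(iters):
        resp = []
        for xi in x:
            ws = [math.exp(-0.5 * ((xi - m) / sigma) ** 2) for m in mus]
            s = sum(ws)
            resp.append([w / s for w in ws])
        mus = [sum(r[k] * xi for r, xi in zip(resp, x)) /
               sum(r[k] for r in resp) for k in range(len(mus))]
    return mus

scale_pitches = sorted(em_means(data, [-50.0, 250.0]))
```

Even from deliberately poor starting values, the estimated means converge onto the two underlying scale pitches.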
Adjustment of median ground motion prediction equations (GMPEs) from one region to another is one of the major challenges in the current practice of seismic hazard analysis. In our approach to generating response spectra, we derive two separate empirical models, for (a) the Fourier amplitude spectrum (FAS) and (b) the duration of ground motion. To calculate response spectra, the two models are combined within the random vibration theory (RVT) framework. The models are calibrated on recordings obtained from shallow crustal earthquakes in active tectonic regions. We use a subset of the NGA-West2 database with M 3.2-7.9 earthquakes at distances of 0-300 km. Because the NGA-West2 database spans a wide magnitude and distance range, it provides better constraints on the derived models. A frequency-dependent duration model is derived to obtain adjustable response spectral ordinates. The excellent agreement of our approach with other NGA-West2 models implies that it can also be used as a stand-alone model.
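The RVT step of combining an FAS with a duration estimate into a response spectral ordinate can be sketched as follows; the flat toy FAS, the trapezoidal spectral moments, and the asymptotic Cartwright/Longuet-Higgins peak factor are illustrative assumptions, not the calibrated models of this study:

```python
import math

def sdof_transfer(f, fn, zeta=0.05):
    # modulus of the SDOF pseudo-acceleration transfer function
    r = (f / fn) ** 2
    return 1.0 / math.sqrt((1.0 - r) ** 2 + (2.0 * zeta * f / fn) ** 2)

def spectral_moment(freqs, fas, fn, order):
    # m_k = 2 * trapezoidal integral of (2*pi*f)^k |H(f) * FAS(f)|^2 df
    y = [(2.0 * math.pi * f) ** order * (sdof_transfer(f, fn) * a) ** 2
         for f, a in zip(freqs, fas)]
    return 2.0 * sum(0.5 * (y[i] + y[i + 1]) * (freqs[i + 1] - freqs[i])
                     for i in range(len(freqs) - 1))

def rvt_sa(freqs, fas, fn, duration):
    m0 = spectral_moment(freqs, fas, fn, 0)
    m2 = spectral_moment(freqs, fas, fn, 2)
    rms = math.sqrt(m0 / duration)                           # RMS response
    n_z = max(duration * math.sqrt(m2 / m0) / math.pi, 2.0)  # zero crossings
    # asymptotic expected peak factor (Cartwright/Longuet-Higgins form)
    x = math.sqrt(2.0 * math.log(n_z))
    return (x + 0.5772 / x) * rms

# toy flat Fourier amplitude spectrum between 0.1 and 20 Hz
freqs = [0.1 + 0.05 * i for i in range(400)]
fas = [1.0] * len(freqs)
sa_resonant = rvt_sa(freqs, fas, fn=5.0, duration=10.0)  # oscillator in band
sa_stiff = rvt_sa(freqs, fas, fn=50.0, duration=10.0)    # oscillator off band
```

An oscillator resonating inside the FAS band picks up far more energy than one whose natural frequency lies above it, which is the behavior the combined FAS-plus-duration model exploits.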
Seismic-hazard assessment is of great importance within the field of engineering seismology. Nowadays, it is common practice to define future seismic demands using probabilistic seismic-hazard analysis (PSHA). Often it is neither obvious nor transparent how PSHA responds to changes in its inputs. In addition, PSHA relies on many uncertain inputs. Sensitivity analysis (SA) is concerned with the assessment and quantification of how changes in the model inputs affect the model response and how input uncertainties influence the distribution of the model response. Sensitivity studies are challenging primarily for computational reasons; hence, the development of efficient methods is of major importance. Powerful local (deterministic) methods widely used in other fields can make SA feasible, even for complex models with a large number of inputs; for example, automatic/algorithmic differentiation (AD)-based adjoint methods. Recently developed derivative-based global sensitivity measures can combine the advantages of such local SA methods with efficient sampling strategies facilitating quantitative global sensitivity analysis (GSA) for complex models. In our study, we propose and implement exactly this combination. It allows an upper bounding of the sensitivities involved in PSHA globally and, therefore, an identification of the noninfluential and the most important uncertain inputs. To the best of our knowledge, it is the first time that derivative-based GSA measures are combined with AD in practice. In addition, we show that first-order uncertainty propagation using the delta method can give satisfactory approximations of global sensitivity measures and allow a rough characterization of the model output distribution in the case of PSHA. An illustrative example is shown for the suggested derivative-based GSA of a PSHA that uses stochastic ground-motion simulations.
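A derivative-based global sensitivity measure of the kind discussed above can be sketched in a few lines: estimate nu_i = E[(df/dx_i)^2] by Monte Carlo over the input distribution and use small values to flag non-influential inputs. The toy model f and its analytic gradient below are assumptions for illustration, not a PSHA:

```python
import random

random.seed(4)

# toy model on the unit cube: f(x1, x2, x3) = 2*x1 + x2**2 + 0.01*x3,
# with x3 nearly non-influential by construction
def grad_f(x1, x2, x3):
    # analytic gradient; in practice this is where AD would be used
    return (2.0, 2.0 * x2, 0.01)

# derivative-based global sensitivity measures nu_i = E[(df/dx_i)^2]
n = 20000
acc = [0.0, 0.0, 0.0]
for _ in range(n):
    g = grad_f(random.random(), random.random(), random.random())
    for i in range(3):
        acc[i] += g[i] ** 2
nu = [a / n for a in acc]

# for uniform inputs on [0, 1], nu_i / (pi**2 * Var(f)) upper-bounds the
# total Sobol index, so a tiny nu_i identifies a non-influential input
```

The measures recover the intended ordering: x1 dominates, x2 matters, and x3 is negligible.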
We have analyzed the recently developed pan-European strong motion database, RESORCE-2012: spectral parameters such as stress drop (stress parameter, Δσ), anelastic attenuation (Q), near-surface attenuation (κ0) and site amplification have been estimated from observed strong motion recordings. The selected dataset exhibits a bilinear distance-dependent Q model with an average κ0 value of 0.0308 s. Strong regional variations in anelastic attenuation were also observed: frequency-independent Q0 values of 1462 and 601 were estimated for Turkish and Italian data, respectively. Due to the strong coupling between Q and κ0, the regional variations in Q have a strong impact on the estimation of near-surface attenuation: κ0 was estimated as 0.0457 s for Turkey and 0.0261 s for Italy. Furthermore, a detailed analysis of the variability in estimated κ0 revealed significant within-station variability. The linear site amplification factors were constrained from residual analysis at each station and site-class type. Using the regional Q0 model and a site-class-specific κ0, seismic moments (M0) and source corner frequencies (fc) were estimated from the site-corrected empirical Fourier spectra. Δσ did not exhibit magnitude dependence; the median Δσ value was obtained as 5.75 MPa from inverted magnitudes and 5.65 MPa from database magnitudes. A comparison of response spectra from the stochastic model derived herein with those from (regional) ground motion prediction equations (GMPEs) suggests that the presented seismological parameters can be used to represent the corresponding seismological attributes of the regional GMPEs in a host-to-target adjustment framework. The analysis presented herein can be considered an update of that undertaken for the previous Euro-Mediterranean strong motion database by Edwards and Fäh (Geophys J Int 194(2):1190-1202, 2013a).
A partially non-ergodic ground-motion prediction equation is estimated for Europe and the Middle East. To this end, a hierarchical model is presented that accounts for regional differences: the scaling of ground-motion intensity measures is assumed to be similar, but not identical, in different regions. This is achieved by treating some coefficients as random variables sampled from an underlying global distribution. The coefficients are estimated by Bayesian inference, which allows one to estimate the epistemic uncertainty in the coefficients, and consequently in model predictions, in a rigorous way. The model is estimated from peak ground acceleration data from nine different European/Middle Eastern regions. There are large differences in the number of earthquakes and records between the regions. However, due to the hierarchical nature of the model, regions with only few data points borrow strength from regions with more data, which makes it possible to estimate a separate set of coefficients for every region. Different regionalized models, in which different coefficients are assumed to be regionally dependent, are compared. Results show that regionalizing the coefficients for magnitude and distance scaling leads to better model performance. The models for all regions are physically sound, even for regions comprising only very few earthquakes.
The functional form of empirical response spectral ground-motion prediction equations (GMPEs) is often derived using concepts borrowed from Fourier spectral modeling of ground motion. As these GMPEs are subsequently calibrated with empirical observations, this may not appear to pose any major problems in the prediction of ground motion for a particular earthquake scenario. However, the assumption that Fourier spectral concepts persist for response spectra can lead to undesirable consequences when it comes to the adjustment of response spectral GMPEs to represent conditions not covered in the original empirical data set. In this context, a couple of important questions arise, for example, what are the distinctions and/or similarities between Fourier and response spectra of ground motions? And, if they are different, then what is the mechanism responsible for such differences and how do adjustments that are made to Fourier amplitude spectrum (FAS) manifest in response spectra? The present article explores the relationship between the Fourier and response spectrum of ground motion by using random vibration theory (RVT). With a simple Brune (1970, 1971) source model, RVT-generated acceleration spectra for a fixed magnitude and distance scenario are used. The RVT analyses reveal that the scaling of low oscillator-frequency response spectral ordinates can be treated as being equivalent to the scaling of the corresponding Fourier spectral ordinates. However, the high oscillator-frequency response spectral ordinates are controlled by a rather wide band of Fourier spectral ordinates. In fact, the peak ground acceleration, counter to the popular perception that it is a reflection of the high-frequency characteristics of ground motion, is controlled by the entire Fourier spectrum of ground motion. Additionally, this article demonstrates how an adjustment made to FAS is similar or different to the same adjustment made to response spectral ordinates. 
For this purpose, two cases are considered: adjustments to the stress parameter Δσ (a source term), and adjustments to the attributes reflecting site response (VS and κ0).
A SSHAC Level 3 Probabilistic Seismic Hazard Analysis for a New-Build Nuclear Site in South Africa
(2015)
A probabilistic seismic hazard analysis has been conducted for a potential nuclear power plant site on the coast of South Africa, a country of low-to-moderate seismicity. The hazard study was conducted as a SSHAC Level 3 process, the first application of this approach outside North America. Extensive geological investigations identified five fault sources with a non-zero probability of being seismogenic. Five area sources were defined for distributed seismicity, the least active being the host zone for which the low recurrence rates for earthquakes were substantiated through investigations of historical seismicity. Empirical ground-motion prediction equations were adjusted to a horizon within the bedrock at the site using kappa values inferred from weak-motion analyses. These adjusted models were then scaled to create new equations capturing the range of epistemic uncertainty in this region with no strong motion recordings. Surface motions were obtained by convolving the bedrock motions with site amplification functions calculated using measured shear-wave velocity profiles.
A Bayesian ground-motion model is presented that directly estimates the coefficients of the model and the correlation between different ground-motion parameters of interest. The model is developed as a multi-level model with levels for earthquake, station and record terms. This separation allows the estimation of residuals for each level, and thus of the associated aleatory variability. In particular, the usually estimated within-event variability is split into a between-station and a between-record variability. In addition, the covariance structure between different ground-motion parameters of interest is estimated for each level, i.e. the between-event, between-station and between-record correlation coefficients are directly available. All parameters of the model are estimated via Bayesian inference, which allows their epistemic uncertainty to be assessed in a principled way. The model is developed using a recently compiled European strong-motion database. The target variables are peak ground velocity, peak ground acceleration and spectral acceleration at eight oscillator periods. The model performs well with respect to its residuals and is similar to other ground-motion models based on the same underlying database. The correlation coefficients are similar to those estimated for other parts of the world, with nearby periods having a high correlation. The between-station, between-event and between-record correlations generally follow a similar trend.
Empirical ground-motion prediction equations (GMPEs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This article presents a holistic framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain. The approach for developing a response spectral GMPE is unique because it combines the predictions of empirical models for the two model components that characterize the spectral and temporal behavior of the ground motion. Essentially, as described in its initial form by Bora et al. (2014), the approach consists of an empirical model for the Fourier amplitude spectrum (FAS) and a model for the ground-motion duration. These two components are combined within the random vibration theory framework to obtain predictions of response spectral ordinates. In addition, the FAS corresponding to individual acceleration records are extrapolated beyond the usable frequency range using the stochastic FAS model, obtained by inversion as described in Edwards and Fäh (2013a). To that end, an (oscillator-)frequency-dependent duration model, consistent with the empirical FAS model, is also derived. This makes it possible to generate a response spectral model that is easily adjustable to different sets of seismological parameters, such as the stress parameter Δσ, the quality factor Q, and κ0. The dataset used in Bora et al. (2014), a subset of the RESORCE-2012 database, is considered for the present analysis. Based upon the range of the predictor variables in the selected dataset, the present response spectral GMPE should be considered applicable over the magnitude range 4 ≤ Mw ≤ 7.6 at distances up to 200 km.
In probabilistic seismic hazard analysis, different ground-motion prediction equations (GMPEs) are commonly combined within a logic tree framework. The selection of appropriate GMPEs, however, is a non-trivial task, especially for regions where strong motion data are sparse and no indigenous GMPE exists, because the set of models needs to capture the whole range of ground-motion uncertainty. In this study we investigate the aggregation of GMPEs into a mixture model with the aim of inferring a backbone model that is able to represent the center of the ground-motion distribution in a logic tree analysis. This central model can be scaled up and down to obtain the full range of ground-motion uncertainty. The combination of models into a mixture is inferred from observed ground-motion data. We tested the new approach for Northern Chile, a region for which no indigenous GMPE exists. Mixture models were calculated for interface and intraslab type events individually. For each source type we aggregated eight subduction zone GMPEs using mainly new strong-motion data that were recorded within the Plate Boundary Observatory Chile project and that were processed within this study. We can show that the mixture performs better than any of its component GMPEs, and that it performs comparably to a regression model derived for the same dataset. The mixture model seems to represent the median ground motions in that region fairly well. It is thus able to serve as a backbone model for the logic tree.
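The aggregation step, learning mixture weights for fixed component models from observed data, can be sketched with a simple EM iteration. The two hypothetical "GMPE" medians, the fixed sigma, and the synthetic observations are illustrative assumptions, not the subduction-zone models used in the study:

```python
import math
import random

random.seed(2)

# two hypothetical component "GMPE" medians (in log units) per record,
# plus synthetic observations that neither model explains alone
n = 400
medians = [[0.0] * n, [1.0] * n]
obs = [random.gauss(0.3, 0.5) for _ in range(n)]
sigma = 0.5

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def em_weights(obs, medians, sigma, iters=100):
    K = len(medians)
    w = [1.0 / K] * K
    for _ in range(iters):
        # E-step: responsibility of each component model for each record
        resp = []
        for i, y in enumerate(obs):
            ls = [w[k] * normal_pdf(y, medians[k][i], sigma) for k in range(K)]
            s = sum(ls)
            resp.append([l / s for l in ls])
        # M-step: new weight = average responsibility
        w = [sum(r[k] for r in resp) / len(obs) for k in range(K)]
    return w

w = em_weights(obs, medians, sigma)
```

The learned weights sum to one and lean toward the component whose median lies closer to the bulk of the observations.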
Probabilistic seismic-hazard analysis (PSHA) is the current tool of the trade used to estimate the future seismic demands at a site of interest. A modern PSHA represents a complex framework that combines different models with numerous inputs. It is important to understand and assess the impact of these inputs on the model output in a quantitative way. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight about the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs; however, obtaining the derivatives of complex models can be challenging.
In this study, we show how differential sensitivity analysis of a complex framework such as PSHA can be carried out using algorithmic/automatic differentiation (AD). AD has already been successfully applied for sensitivity analyses in various domains such as oceanography and aerodynamics. First, we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities with analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. Second, we derive sensitivities via AD for a more complex PSHA study using a stochastic simulation approach for the prediction of ground motions. The presented approach is general enough to accommodate more advanced PSHA studies of greater complexity.
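Forward-mode AD, the mechanism behind such sensitivity computations, can be illustrated with dual numbers; the toy ln(PGA) model and its coefficients below are hypothetical, not a published GMPE:

```python
import math

class Dual:
    """Forward-mode AD: each Dual carries a value and the derivative of
    that value with respect to the chosen input (seeded with dot=1)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))

    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dlog(x):
    # natural logarithm with derivative propagation
    return Dual(math.log(x.val), x.dot / x.val)

def toy_gmpe(M, R, c0=-1.0, c1=1.5, c2=1.2):
    # hypothetical toy model: ln(PGA) = c0 + c1*M - c2*ln(R)
    return c0 + c1 * M - c2 * dlog(R)

# sensitivity w.r.t. magnitude: seed M with derivative 1
dM = toy_gmpe(Dual(6.0, 1.0), Dual(20.0)).dot   # = c1 = 1.5
# sensitivity w.r.t. distance: seed R with derivative 1
dR = toy_gmpe(Dual(6.0), Dual(20.0, 1.0)).dot   # = -c2 / R = -0.06
```

One forward pass per input yields an exact partial derivative, which is precisely what differential sensitivity analysis of a PSHA chain needs, with no finite-difference step-size tuning.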
Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks.
In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward, and requires specific tailoring and adaptation of existing algorithms. We offer suggestions on how to tackle frequently arising problems in this context and mainly concentrate on the handling of continuous variables, incomplete data sets, and the interaction of both. By way of three case studies from earthquake, flood, and landslide research, we demonstrate the method of data-driven Bayesian network learning, and showcase the flexibility, applicability, and benefits of this approach.
Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.
Inferring a ground-motion prediction equation (GMPE) for a region in which only a small number of seismic events has been observed is a challenging task. A response to this data scarcity is to utilise data from other regions in the hope that there exist common patterns in the generation of ground motion that can contribute to the development of a GMPE for the region in question. This is not an unreasonable course of action since we expect regional GMPEs to be related to each other. In this work we model this relatedness by assuming that the regional GMPEs occupy a common low-dimensional manifold in the space of all possible GMPEs. As a consequence, the GMPEs are fitted in a joint manner and not independent of each other, borrowing predictive strength from each other's regional datasets. Experimentation on a real dataset shows that the manifold assumption displays better predictive performance over fitting regional GMPEs independent of each other.
Aleatory variability in ground-motion prediction, represented by the standard deviation (sigma) of a ground-motion prediction equation, exerts a very strong influence on the results of probabilistic seismic-hazard analysis (PSHA). This is especially so at the low annual exceedance frequencies considered for nuclear facilities; in these cases, even small reductions in sigma can have a marked effect on the hazard estimates. Proper separation and quantification of aleatory variability and epistemic uncertainty can lead to defensible reductions in sigma. One such approach is the single-station sigma concept, which removes that part of sigma corresponding to repeatable site-specific effects. However, the site-to-site component must then be constrained by site-specific measurements or else modeled as epistemic uncertainty and incorporated into the modeling of site effects. The practical application of the single-station sigma concept, including the characterization of the dynamic properties of the site and the incorporation of site-response effects into the hazard calculations, is illustrated for a PSHA conducted at a rock site under consideration for the potential construction of a nuclear power plant.
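The single-station sigma concept can be sketched as a variance decomposition of within-event residuals into a repeatable site-to-site part and a remaining single-station part; the synthetic station terms and noise levels below are illustrative assumptions, not values from the study:

```python
import math
import random
from collections import defaultdict

random.seed(3)

# synthetic within-event residuals: each record's residual is a repeatable
# station term (sd 0.3) plus record-to-record noise (sd 0.4)
records = []
for i in range(20):
    site = random.gauss(0.0, 0.3)
    for _ in range(30):
        records.append((f"ST{i:02d}", site + random.gauss(0.0, 0.4)))

# estimate the repeatable site terms as station means of the residuals
by_sta = defaultdict(list)
for sta, res in records:
    by_sta[sta].append(res)
site_term = {sta: sum(v) / len(v) for sta, v in by_sta.items()}

# variance decomposition: total phi^2 = phi_S2S^2 + phi_SS^2
phi_sq = sum(res ** 2 for _, res in records) / len(records)
phi_ss_sq = sum((res - site_term[sta]) ** 2
                for sta, res in records) / len(records)
phi_s2s_sq = phi_sq - phi_ss_sq

phi = math.sqrt(phi_sq)
phi_ss = math.sqrt(phi_ss_sq)   # single-station within-event variability
```

Removing the repeatable station terms leaves a single-station variability close to the record-to-record noise level, smaller than the total, which is the reduction the single-station sigma approach makes defensible.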
The Ceres earthquake of 29 September 1969 is the largest known earthquake in southern Africa. Digitized analog recordings from Worldwide Standardized Seismographic Network stations (Powell and Fries, 1964) are used to retrieve the point source moment tensor and the most likely centroid depth of the event using full waveform modeling. A scalar seismic moment of 2.2-2.4 × 10^18 N·m, corresponding to a moment magnitude of 6.2-6.3, is found. The analysis confirms the pure strike-slip mechanism previously determined from onset polarities by Green and Bloch (1971). Overall, good agreement with the fault orientation previously estimated from local aftershock recordings is found. The centroid depth can be constrained to be less than 15 km. In a second analysis step, we use a higher-order moment tensor based inversion scheme for simple extended rupture models to constrain the lateral fault dimensions. We find that the rupture propagated unilaterally for 4.7 s from east-southeast to west-northwest over about 17 km (average rupture velocity of about 3.1 km/s).
We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from 1 to 10 variables singled out from 28 candidate variables. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.