Enhancing the resolution and accuracy of surface ground-penetrating radar (GPR) reflection data by inverse filtering to recover a zero-phased band-limited reflectivity image requires a deconvolution technique that takes the mixed-phase character of the embedded wavelet into account. In contrast, standard stochastic deconvolution techniques assume that the wavelet is minimum phase and, hence, often meet with limited success when applied to GPR data. We present a new general-purpose blind deconvolution algorithm for mixed-phase wavelet estimation and deconvolution that (1) uses the parametrization of a mixed-phase wavelet as the convolution of the wavelet's minimum-phase equivalent with a dispersive all-pass filter, (2) includes prior information about the wavelet to be estimated in a Bayesian framework, and (3) relies on the assumption of a sparse reflectivity. Solving the normal equations using the data autocorrelation function provides an inverse filter that optimally removes the minimum-phase equivalent of the wavelet from the data, which leaves traces with a balanced amplitude spectrum but distorted phase. To compensate for the remaining phase errors, we invert in the frequency domain for an all-pass filter thereby taking advantage of the fact that the action of the all-pass filter is exclusively contained in its phase spectrum. A key element of our algorithm and a novelty in blind deconvolution is the inclusion of prior information that allows resolving ambiguities in polarity and timing that cannot be resolved using the sparseness measure alone. We employ a global inversion approach for non-linear optimization to find the all-pass filter phase values for each signal frequency. We tested the robustness and reliability of our algorithm on synthetic data with different wavelets, 1-D reflectivity models of different complexity, varying levels of added noise, and different types of prior information. When applied to realistic synthetic 2-D data and 2-D field data, we obtain images with increased temporal resolution compared to the results of standard processing.
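As an illustration of the wavelet parametrization used in step (1), here is a minimal numerical sketch (a standard homomorphic construction, not the authors' code): a toy mixed-phase wavelet is split into its minimum-phase equivalent and an all-pass component whose information is carried entirely by its phase spectrum. All signal parameters are made up for illustration.

```python
import numpy as np

def minimum_phase_equivalent(w, n):
    """Minimum-phase signal with the same amplitude spectrum as w, built from the
    folded real cepstrum (standard homomorphic construction, not the authors' code)."""
    spec = np.fft.fft(w, n)
    cep = np.fft.ifft(np.log(np.abs(spec) + 1e-12)).real
    fold = np.zeros(n)
    fold[0], fold[n // 2] = cep[0], cep[n // 2]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

# Toy mixed-phase wavelet: a zero-phase Ricker pulse convolved with a short
# two-point filter (all numbers purely illustrative).
n = 256
t = (np.arange(n) - 32) * 4e-3
ricker = (1.0 - 2.0 * (np.pi * 25.0 * t) ** 2) * np.exp(-(np.pi * 25.0 * t) ** 2)
w = np.convolve(ricker, [0.5, -1.0])[:n]

w_min = minimum_phase_equivalent(w, n)

# The all-pass component A(f) = W(f) / W_min(f) has (near-)unit amplitude; the
# mixed-phase character of the wavelet is carried entirely by its phase spectrum,
# which is the quantity the second inversion step of the paper solves for.
W, W_min = np.fft.fft(w, n), np.fft.fft(w_min, n)
allpass_phase = np.angle(W / (W_min + 1e-12))
print(np.round(allpass_phase[:8], 3))
```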
An important task of seismic hazard assessment consists of estimating the rate of seismic moment release, which is correlated with the rate of tectonic deformation and the seismic coupling. However, estimates of deformation depend on the type of information utilized (e.g. geodetic, geological, seismic) and include large uncertainties. We therefore estimate the deformation rate in the Lower Rhine Embayment (LRE), Germany, using an integrated approach in which the uncertainties have been systematically incorporated. On the basis of a new homogeneous earthquake catalogue, we initially determine the frequency-magnitude distribution by statistical methods. In particular, we focus on an adequate estimation of the upper bound of the Gutenberg-Richter relation and demonstrate the importance of additional palaeoseismological information. The integration of seismological and geological information yields a probability distribution of the upper-bound magnitude. Using this distribution together with the distribution of Gutenberg-Richter a and b values, we perform Monte Carlo simulations to derive the seismic moment release as a function of the observation time. The seismic moment release estimated from synthetic earthquake catalogues of short catalogue length is found to systematically underestimate the long-term moment rate, which can be determined analytically. The moment release recorded in the LRE over the last 250 yr is found to be in good agreement with the probability distribution resulting from the Monte Carlo simulations. Furthermore, the long-term distribution is, within its uncertainties, consistent with the moment rate derived from geological measurements, indicating an almost complete seismic coupling in this region. By means of Kostrov's formula, we additionally calculate the full deformation rate tensor using the distribution of known focal mechanisms in the LRE. Finally, we use the same approach to calculate the seismic moment and the deformation rate for two subsets of the catalogue corresponding to the east- and west-dipping faults, respectively.
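As an illustration of the Monte Carlo step described here, a minimal sketch that samples Gutenberg-Richter parameters and an upper-bound magnitude, generates synthetic catalogues, and accumulates the seismic moment release. All distributions and numerical values are placeholders, not the values derived for the LRE in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def moment_Nm(mw):
    """Seismic moment (N*m) from moment magnitude, Hanks & Kanamori (1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

# Illustrative Gutenberg-Richter parameters (NOT the LRE values of the paper):
# annual rate above m_min, an uncertain b-value, and an uncertain upper bound M_max.
m_min, a_rate = 2.0, 5.0          # assumed events/yr with M >= m_min
n_sim, t_obs = 5_000, 250         # catalogues of 250 yr, as in the comparison period

moment_sums = np.empty(n_sim)
for i in range(n_sim):
    b = rng.normal(1.0, 0.1)                      # assumed b-value distribution
    m_max = rng.triangular(6.0, 6.5, 7.0)         # assumed M_max distribution
    n_ev = rng.poisson(a_rate * t_obs)
    # doubly truncated exponential (Gutenberg-Richter) magnitudes via inverse CDF
    u = rng.random(n_ev)
    beta = b * np.log(10.0)
    mags = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta
    moment_sums[i] = moment_Nm(mags).sum()

rate = moment_sums / t_obs
print("median moment rate [N*m/yr]:", np.median(rate))
print("16th-84th percentile:", np.percentile(rate, [16, 84]))
```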
Digital seismology tutor
(2001)
One of the major challenges in engineering seismology is the reliable prediction of site-specific ground motion for particular earthquakes, observed at specific distances. For larger events, a special problem arises at short distances with the source-to-site distance measure, because distance metrics based on a point-source model are no longer appropriate. As a consequence, different attenuation relations differ in the distance metric that they use. In addition to being a source of confusion, this makes it difficult to quantitatively compare or combine different ground-motion models, for example, in the context of Probabilistic Seismic Hazard Assessment, in cases where ground-motion models with different distance metrics occupy neighboring branches of a logic tree. In such a situation, very crude assumptions about source sizes and orientations often have to be used to derive an estimate of the particular metric required. Even if this solves the problem of providing a number to put into the attenuation relation, a serious problem remains. When converting distance measures, the corresponding uncertainties map onto the estimated ground motions according to the laws of error propagation. To make matters worse, conversion of distance metrics can cause the uncertainties of the adapted ground-motion model to become magnitude and distance dependent, even if they are not in the original relation. To treat this problem quantitatively, the variability increase caused by the distance-metric conversion has to be quantified. For this purpose, we have used well-established scaling laws to determine explicit distance-conversion relations using regression analysis on simulated data. We demonstrate that, for all practical purposes, most popular distance metrics can be related to the Joyner-Boore distance using models based on gamma distributions to express the shape of some "residual function." The functional forms are magnitude and distance dependent and are expressed as polynomials. We compare the performance of these relations with manually derived individual distance estimates for the Landers, the Imperial Valley, and the Chi-Chi earthquakes.
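To illustrate the error-propagation argument, here is a minimal first-order sketch of how the uncertainty of a converted distance inflates the ground-motion variability. The attenuation coefficient, the distance-uncertainty value, and all numbers are assumptions for illustration, not the regression relations derived in the paper.

```python
import numpy as np

def sigma_with_distance_conversion(sigma_gm, dlnY_dR, sigma_r):
    """First-order (Gaussian) error propagation: the variance of a converted distance
    adds to the ground-motion variance through the distance sensitivity of the
    prediction equation. Illustrative only, not the paper's models."""
    return np.sqrt(sigma_gm**2 + (dlnY_dR * sigma_r)**2)

# Toy attenuation term ln Y = ... - 1.2 * ln(R) (assumed coefficient), so
# d lnY / dR = -1.2 / R. A 3 km standard deviation on the converted distance
# inflates sigma noticeably at short distances and hardly at all far away,
# i.e. the added variability is distance dependent.
for r in (5.0, 10.0, 30.0, 100.0):
    s = sigma_with_distance_conversion(sigma_gm=0.6, dlnY_dR=-1.2 / r, sigma_r=3.0)
    print(f"R = {r:5.1f} km  ->  sigma = {s:.3f} (ln units)")
```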
Tuning systems of traditional Georgian singing determined from a new corpus of field recordings
(2022)
In this study we examine the tonal organization of the 2016 GVM dataset, a newly-created corpus of high-quality multimedia field recordings of traditional Georgian singing with a focus on Svaneti. For this purpose, we developed a new processing pipeline for the computational analysis of non-western polyphonic music which was subsequently applied to the complete 2016 GVM dataset.
To evaluate under what conditions a single tuning system is representative of current Svan performance practice, we examined the stability of the obtained tuning systems from an ensemble-, a song-, and a corpus-related perspective.
Furthermore, we compared the resulting Svan tuning systems with the tuning systems obtained for the Erkomaishvili dataset (Rosenzweig et al., 2020) in the study by Scherbaum et al. (2020). In comparison to a 12-TET (12-tone-equal-temperament) system, the Erkomaishvili and the Svan tuning systems are surprisingly similar.
Both systems show a strong presence of pure fourths (500 cents) and fifths (700 cents), and 'neutral' thirds (peaking around 350 cents) as well as 'neutral' sixths.
In addition, the sizes of the melodic and the harmonic seconds in both tuning systems differ systematically from each other, with the size of the harmonic second being systematically larger than the melodic one.
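For readers unfamiliar with the cent measure used throughout (500 cents for a pure fourth, 700 cents for a pure fifth), interval sizes are 1200 times the base-2 logarithm of the frequency ratio. A minimal sketch with made-up F0 values:

```python
import numpy as np

def cents(f_upper, f_lower):
    """Interval size in cents between two F0 values (1200 * log2 of the ratio)."""
    return 1200.0 * np.log2(np.asarray(f_upper) / np.asarray(f_lower))

# Illustrative harmonic intervals between two voice F0 values (Hz, made up).
print(round(cents(300.0, 200.0), 1))   # ~702 cents: a pure fifth
print(round(cents(266.7, 200.0), 1))   # ~499 cents: a pure fourth
print(round(cents(245.0, 200.0), 1))   # ~351 cents: a "neutral" third
```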
In this study we examine the tonal organization of a series of recordings of liturgical chants, sung in 1966 by the Georgian master singer Artem Erkomaishvili. This dataset is the oldest corpus of Georgian chants from which the time-synchronous F0 trajectories for all three voices have been reliably determined (Müller et al. 2017). It is therefore of outstanding importance for the understanding of the tuning principles of traditional Georgian vocal music.
The aim of the present study is to use various computational methods to analyze what these recordings can contribute to the ongoing scientific dispute about traditional Georgian tuning systems. The starting point for the present analysis is the re-release of the original audio data together with estimated fundamental frequency (F0) trajectories for each of the three voices, beat annotations, and digital scores (Rosenzweig et al. 2020). We present synoptic models for the pitch and the harmonic interval distributions, which are the first such models for which the complete Erkomaishvili dataset was used. We show that these distributions can be expressed very compactly as Gaussian mixture models, anchored on discrete sets of pitch or interval values for the pitch and interval distributions, respectively. As part of our study we demonstrate that these pitch values, which we refer to as scale pitches and which are determined as the mean values of the Gaussian mixture elements, define the scale degrees of the melodic sound scales that form the skeleton of Artem Erkomaishvili's intonation. The observation of consistent pitch bending of notes in melodic phrases, which appear in identical form in a group of chants, as well as the observation of harmonically driven intonation adjustments, which are clearly documented for all pure harmonic intervals, demonstrates that Artem Erkomaishvili intentionally deviates from the scale-pitch skeleton quite freely. As a central result of our study, we show that this melodic freedom is always constrained by the attracting influence of the scale pitches. Deviations of the F0 values of individual note events from the scale pitches at one instant in time are compensated for in the subsequent melodic steps. This suggests a deviation-compensation mechanism at the core of Artem Erkomaishvili's melody generation, which clearly honors the scales but still allows for a large degree of melodic flexibility. This model, which summarizes all partial aspects of our analysis, is consistent with the melodic scale models derived from the observed pitch distributions, as well as with the melodic and harmonic interval distributions. Beyond these tangible results, we believe that our work has general implications for the determination of tuning models from audio data, in particular for non-tempered music.
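As an illustration of the kind of model described here, a minimal sketch that fits a Gaussian mixture to synthetic pitch values in cents and reads off the component means as "scale pitches". The data are made up; the actual study works with the F0 trajectories of the Erkomaishvili recordings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for note-wise pitch values in cents relative to a reference:
# three scale degrees around 0, 350 and 700 cents with intonation scatter
# (illustrative numbers only, not the Erkomaishvili data).
pitches = np.concatenate([
    rng.normal(0.0, 25.0, 400),
    rng.normal(350.0, 30.0, 300),
    rng.normal(700.0, 25.0, 350),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(pitches)

# The component means play the role of the "scale pitches" in the sense of the
# abstract: discrete anchors around which the observed intonation scatters.
scale_pitches = np.sort(gmm.means_.ravel())
print("estimated scale pitches [cents]:", np.round(scale_pitches, 1))
```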
Logic trees have become the most popular tool for the quantification of epistemic uncertainties in probabilistic seismic hazard assessment (PSHA). In a logic-tree framework, epistemic uncertainty is expressed in a set of branch weights, by which an expert or an expert group assigns degree-of-belief values to the applicability of the corresponding branch models. Despite the popularity of logic trees, however, one finds surprisingly few clear commitments to what logic-tree branch weights are assumed to be (even by hazard analysts designing logic trees). In the present paper we argue that it is important for hazard analysts to accept the probabilistic framework from the beginning when assigning logic-tree branch weights; in other words, to accept that logic-tree branch weights are probabilities in the axiomatic sense, independent of one's preference for the philosophical interpretation of probabilities. We demonstrate that interpreting logic-tree branch weights merely as numerical measures of "model quality," which are then normalized to sum to unity, will, with an increasing number of models, inevitably lead to an apparent insensitivity of hazard curves to the logic-tree branch weights, which may even be mistaken for robustness of the results. Finally, we argue that assigning logic-tree branch weights in a sequential fashion may improve their logical consistency.
Although the methodological framework of probabilistic seismic hazard analysis is well established, the selection of models to predict the ground motion at the sites of interest remains a major challenge. Information theory provides a powerful theoretical framework that can guide this selection process in a consistent way. From an information-theoretic perspective, the appropriateness of models can be expressed in terms of their relative information loss (Kullback-Leibler distance) and hence in physically meaningful units (bits). In contrast to hypothesis testing, information-theoretic model selection does not require ad hoc decisions regarding significance levels, nor does it require the models to be mutually exclusive and collectively exhaustive. The key ingredient, the Kullback-Leibler distance, can be estimated from the statistical expectation of log-likelihoods of observations for the models under consideration. In the present study, data-driven ground-motion model selection based on Kullback-Leibler-distance differences is illustrated for a set of simulated observations of response spectra and macroseismic intensities. Information theory allows for a unified treatment of both quantities. The application of Kullback-Leibler-distance-based model selection to real data, using the model-generating data set for the Abrahamson and Silva (1997) ground-motion model, demonstrates the superior performance of the information-theoretic perspective in comparison to earlier attempts at data-driven model selection (e.g., Scherbaum et al., 2004).
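As an illustration of the key ingredient, here is a minimal sketch of estimating a Kullback-Leibler distance difference from sample-average log-likelihoods, assuming normally distributed residuals in log ground motion. The model predictions, sigmas, and data are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def mean_log_likelihood(obs_lnY, pred_lnY, sigma):
    """Average log-likelihood of observed log ground motions under a model with
    normally distributed residuals; differences of this quantity between two models
    estimate their Kullback-Leibler distance difference (the data-dependent constant
    cancels). Illustrative implementation, not the paper's code."""
    return norm.logpdf(obs_lnY, loc=pred_lnY, scale=sigma).mean()

# Toy example with made-up numbers: "model A" predicts closer to the data.
rng = np.random.default_rng(3)
obs = rng.normal(loc=-4.0, scale=0.6, size=200)           # e.g. observed ln(PGA)
ll_a = mean_log_likelihood(obs, pred_lnY=-4.1, sigma=0.6)
ll_b = mean_log_likelihood(obs, pred_lnY=-3.5, sigma=0.7)

# Relative information loss expressed in bits (log base 2), as advocated above.
delta_bits = (ll_a - ll_b) / np.log(2.0)
print(f"model A is preferred by about {delta_bits:.2f} bits per observation")
```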
The estimation of minimum-misfit stochastic models from empirical ground-motion prediction equations
(2006)
In areas of moderate to low seismic activity there is commonly a lack of recorded strong ground motion. As a consequence, the prediction of ground motion expected for hypothetical future earthquakes is often performed by employing empirical models from other regions. In this context, Campbell's hybrid empirical approach (Campbell, 2003, 2004) provides a methodological framework to adapt ground-motion prediction equations to arbitrary target regions by using response-spectral host-to-target-region conversion filters. For this purpose, the empirical ground-motion prediction equation has to be quantified in terms of a stochastic model. The problem we address here is how to do this in a systematic way and how to assess the corresponding uncertainties. For the determination of the model parameters we use a genetic algorithm search. The stochastic model spectra were calculated by using a speed-optimized version of SMSIM (Boore, 2000). For most of the empirical ground-motion models, we obtain sets of stochastic models that match the empirical models fairly well within the full magnitude and distance ranges of their generating data sets. The overall quality of fit and the resulting model parameter sets strongly depend on the particular choice of the distance metric used for the stochastic model. We suggest the use of the hypocentral distance metric for the stochastic simulation of strong ground motion because it provides the lowest-misfit stochastic models for most empirical equations. This is in agreement with the results of two recent studies of hypocenter locations in finite-source models, which indicate that hypocenters are often located close to regions of large slip (Mai et al., 2005; Manighetti et al., 2005). Because essentially all empirical ground-motion prediction equations contain data from different geographical regions, the model parameters corresponding to the lowest-misfit stochastic models cannot necessarily be expected to represent single, physically realizable host regions but rather to model the generating data sets in an average way. In addition, the differences between the lowest-misfit stochastic models and the empirical ground-motion prediction equation are strongly distance, magnitude, and frequency dependent, which, according to the laws of uncertainty propagation, will increase the variance of the corresponding hybrid empirical model predictions (Scherbaum et al., 2005). As a consequence, the selection of empirical ground-motion models for host-to-target-region conversions requires considerable judgment of the ground-motion analyst.
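The following toy sketch only mimics the structure of the parameter search: a drastically simplified omega-squared spectral shape (standing in for the full SMSIM response spectra) is fitted to a synthetic target spectrum with scipy's differential evolution, a population-based global optimizer in the same spirit as the genetic algorithm used in the study. All function names, parameter bounds, and values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a stochastic model spectrum: omega-squared source shape with
# corner frequency fc, near-surface attenuation kappa, and an overall level.
def toy_spectrum(f, level, fc, kappa):
    return level * (f / fc) ** 2 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

# Pretend "empirical" target spectrum (made-up numbers standing in for the values
# predicted by an empirical ground-motion model).
f = np.logspace(-1, 1.5, 60)
target = toy_spectrum(f, level=120.0, fc=0.8, kappa=0.04)

# Log-spectral misfit, minimized by a global, population-based optimizer.
def misfit(p):
    level, fc, kappa = p
    return np.sqrt(np.mean((np.log(toy_spectrum(f, level, fc, kappa)) - np.log(target)) ** 2))

result = differential_evolution(misfit, bounds=[(1.0, 500.0), (0.1, 5.0), (0.0, 0.1)], seed=1)
print("best-fit (level, fc, kappa):", np.round(result.x, 3), " misfit:", round(result.fun, 4))
```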
The use of ground-motion prediction equations to estimate ground shaking has become a very popular approach for seismic-hazard assessment, especially in the framework of a logic-tree approach. Owing to the large number of published ground-motion models, however, the selection and ranking of appropriate models for a particular target area often pose serious practical problems. Here we show how observed ground-motion records can help to guide this process in a systematic and comprehensible way. A key element in this context is a new, likelihood-based goodness-of-fit measure that not only quantifies the model fit but also measures to some degree how well the underlying statistical model assumptions are met. By design, this measure naturally scales between 0 and 1, with a value of 0.5 for a situation in which the model perfectly matches the sample distribution in terms of both mean and standard deviation. We have used it in combination with other goodness-of-fit measures to derive a simple classification scheme to quantify how well a candidate ground-motion prediction equation models a particular set of observed response spectra. This scheme is demonstrated to perform well in recognizing a number of popular ground-motion models from their rock-site recording subsets. This indicates its potential for aiding the assignment of logic-tree weights in a consistent and reproducible way. We have applied our scheme to the border region of France, Germany, and Switzerland, where the Mw 4.8 St. Dié earthquake of 22 February 2003 in eastern France recently provided a small set of observed response spectra. These records are best modeled by the ground-motion prediction equation of Berge-Thierry et al. (2003), which is based on the analysis of predominantly European data. The fact that the Swiss model of Bay et al. (2003) is not able to model the observed records in an acceptable way may indicate general problems arising from the use of weak-motion data for strong-motion prediction.
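For illustration, here is our hedged reading of a likelihood-based goodness-of-fit parameter of the kind described here (following Scherbaum et al., 2004): the two-sided exceedance probability of the normalized residual under a standard normal model, which by construction lies between 0 and 1 and has a median of 0.5 when model and data agree. All numbers are made up.

```python
import numpy as np
from scipy.special import erfc

def lh_values(obs_lnY, pred_lnY, sigma):
    """Two-sided exceedance probability of the normalized residual under a standard
    normal distribution (our reading of the likelihood-based measure). For data that
    truly follow the model, these values are uniform on [0, 1] with median 0.5."""
    z = (obs_lnY - pred_lnY) / sigma
    return erfc(np.abs(z) / np.sqrt(2.0))

# Toy check with synthetic residuals that match the model exactly.
rng = np.random.default_rng(7)
obs = rng.normal(-4.0, 0.6, 500)
lh = lh_values(obs, pred_lnY=-4.0, sigma=0.6)
print("median LH:", round(np.median(lh), 3))   # close to 0.5 for a well-matched model
```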
Composite ground-motion models and logic trees: Methodology, sensitivities, and uncertainties
(2005)
Logic trees have become a popular tool in seismic hazard studies. Commonly, the models corresponding to the end branches of the complete logic tree in a probabilistic seismic hazard analysis (PSHA) are treated separately until the final calculation of the set of hazard curves. This comes at the price that information regarding sensitivities and uncertainties in the ground-motion sections of the logic tree is only obtainable after disaggregation. Furthermore, from this end-branch model perspective, even the designers of the logic tree cannot directly tell what ground-motion scenarios would most likely result from their logic trees for a given earthquake at a particular distance, nor how uncertain these scenarios might be or how they would be affected by the choices of the hazard analyst. On the other hand, all this information is already implicitly present in the logic tree. Therefore, with the ground-motion perspective that we propose in the present article, we treat the ground-motion sections of a complete logic tree for seismic hazard as a single composite model representing the complete state of knowledge and belief of a particular analyst on ground motion in a particular target region. We implement this view by resampling the ground-motion models represented in the ground-motion sections of the logic tree by Monte Carlo simulation (separately for the median values and the sigma values) and then recombining the sets of simulated values in proportion to their logic-tree branch weights. The quantiles of this resampled composite model provide the hazard analyst and the decision maker with a simple, clear, and quantitative representation of the overall physical meaning of the ground-motion section of a logic tree and the accompanying epistemic uncertainty. Quantiles of the composite model also provide an easy way to analyze the sensitivities and uncertainties related to a given logic-tree model. We illustrate this for a composite ground-motion model for central Europe. Further potential fields of application are seen wherever individual best estimates of ground motion have to be derived from a set of candidate models, for example, for hazard maps, sensitivity studies, or for modeling scenario earthquakes.
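A minimal sketch of the composite-model idea for a single magnitude-distance scenario, assuming three hypothetical GMPE branches with made-up medians, sigmas, and weights. For brevity the sketch draws full ground-motion samples from the weighted mixture rather than resampling medians and sigmas separately as described above.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical ground-motion branches of a logic tree for one fixed scenario:
# (branch weight, median ln(PGA), sigma). All numbers are made up.
branches = [
    (0.5, -4.2, 0.55),
    (0.3, -4.0, 0.60),
    (0.2, -4.5, 0.50),
]

n = 200_000
weights = np.array([b[0] for b in branches])
idx = rng.choice(len(branches), size=n, p=weights)    # pick branches in proportion to weights
medians = np.array([b[1] for b in branches])[idx]
sigmas = np.array([b[2] for b in branches])[idx]
samples = rng.normal(medians, sigmas)                  # resampled ground motion per branch

# Quantiles of the composite model summarize the state of knowledge and belief
# expressed by the ground-motion section of the logic tree for this scenario.
q = np.percentile(samples, [5, 16, 50, 84, 95])
print("composite ln(PGA) quantiles (5/16/50/84/95%):", np.round(q, 2))
```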
The most recent intense earthquake swarm in the Vogtland lasted from 6 October 2008 until January 2009. The largest magnitudes exceeded M3.5 several times in October, making it the strongest swarm since 1985/86. In contrast to the swarms in 1985 and 2000, seismic moment release was concentrated near the swarm onset. The focal area and temporal evolution are similar to those of the swarm in 2000. Our working hypothesis is that uprising upper-mantle fluids trigger swarm earthquakes at a low stress level. To monitor the seismicity, the University of Potsdam operated a small-aperture seismic array at 10 km epicentral distance between 18 October 2008 and 18 March 2009. Consisting of 12 seismic stations and 3 additional microphones, the array is capable of detecting earthquakes down to very low magnitudes (M < -1) as well as the associated air waves. We use array techniques to determine properties of the incoming wavefield: noise, direct P and S waves, and converted phases.
In probabilistic seismic-hazard analysis, epistemic uncertainties are commonly treated within a logic-tree framework in which the branch weights express the degree of belief of an expert in a set of models. For the calculation of the distribution of hazard curves, these branch weights represent subjective probabilities. A major challenge for experts is to provide logically consistent weight estimates (in the sense of Kolmogorov's axioms), to be aware of the multitude of heuristics, and to minimize the biases that affect human judgment under uncertainty. We introduce a platform-independent, interactive program enabling us to quantify, elicit, and transfer expert knowledge into a set of subjective probabilities by applying experimental design theory, following the approach of Curtis and Wood (2004). Instead of determining the set of probabilities for all models in a single step, the computer-driven elicitation process is performed as a sequence of evaluations of relative weights for small subsets of models. From these, the probabilities for the whole model set are determined as the solution of an optimization problem. The result of this process is a set of logically consistent probabilities together with a measure of confidence, determined from the amount of conflicting information provided by the expert during the relative weighting process. We experiment with different scenarios simulating likely expert behaviors in the context of knowledge elicitation and show the impact this has on the results. The overall aim is to provide a smart elicitation technique, and our findings serve as a guide for practical applications.
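One simple way such an optimization step could look is a least-squares fit of log-weights to pairwise relative judgments, followed by normalization; this is only an illustration of the principle and is not the experimental-design scheme of Curtis and Wood (2004) used in the study. All ratio values are made up and deliberately slightly inconsistent.

```python
import numpy as np

# Elicited relative weights for pairs of models (i, j, w_i / w_j); made-up,
# mildly conflicting judgments for four candidate models.
pairs = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 2.5), (2, 3, 3.0), (0, 3, 10.0)]
n_models = 4

# Solve for log-weights in a least-squares sense: log w_i - log w_j ~= log r_ij.
A = np.zeros((len(pairs), n_models))
b = np.zeros(len(pairs))
for k, (i, j, r) in enumerate(pairs):
    A[k, i], A[k, j], b[k] = 1.0, -1.0, np.log(r)

log_w, *_ = np.linalg.lstsq(A, b, rcond=None)
w = np.exp(log_w)
w /= w.sum()                    # normalize to a logically consistent probability set
print("branch weights:", np.round(w, 3))
# A crude stand-in for a confidence measure: how much the judgments conflict.
print("residual norm:", np.round(np.linalg.norm(A @ log_w - b), 3))
```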
Aleatory variability in ground-motion prediction, represented by the standard deviation (sigma) of a ground-motion prediction equation, exerts a very strong influence on the results of probabilistic seismic-hazard analysis (PSHA). This is especially so at the low annual exceedance frequencies considered for nuclear facilities; in these cases, even small reductions in sigma can have a marked effect on the hazard estimates. Proper separation and quantification of aleatory variability and epistemic uncertainty can lead to defensible reductions in sigma. One such approach is the single-station sigma concept, which removes that part of sigma corresponding to repeatable site-specific effects. However, the site-to-site component must then be constrained by site-specific measurements or else modeled as epistemic uncertainty and incorporated into the modeling of site effects. The practical application of the single-station sigma concept, including the characterization of the dynamic properties of the site and the incorporation of site-response effects into the hazard calculations, is illustrated for a PSHA conducted at a rock site under consideration for the potential construction of a nuclear power plant.
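A minimal sketch of the variance decomposition behind the single-station sigma concept, using the common tau / phi / phi_S2S notation for the between-event, within-event, and site-to-site components; the numerical values are illustrative, not those of the study.

```python
import numpy as np

def single_station_sigma(tau, phi, phi_s2s):
    """Single-station sigma: remove the repeatable site-to-site term phi_S2S from the
    within-event variability phi and recombine with the between-event term tau.
    Standard variance decomposition; the numbers below are made up."""
    phi_ss = np.sqrt(phi**2 - phi_s2s**2)     # single-station within-event component
    return np.sqrt(tau**2 + phi_ss**2)

# Illustrative values (ln units): even a modest phi_S2S gives a visible reduction.
sigma_total = np.sqrt(0.35**2 + 0.55**2)
sigma_ss = single_station_sigma(tau=0.35, phi=0.55, phi_s2s=0.35)
print(f"ergodic sigma = {sigma_total:.3f},  single-station sigma = {sigma_ss:.3f}")
```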