Probabilistic seismic-hazard analysis (PSHA) is the current tool of the trade used to estimate the future seismic demands at a site of interest. A modern PSHA represents a complex framework that combines different models with numerous inputs. It is important to understand and assess the impact of these inputs on the model output in a quantitative way. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight about the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs; however, obtaining the derivatives of complex models can be challenging.
In this study, we show how differential sensitivity analysis of a complex framework such as PSHA can be carried out using algorithmic/automatic differentiation (AD). AD has already been successfully applied for sensitivity analyses in various domains such as oceanography and aerodynamics. First, we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities with analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. Second, we derive sensitivities via AD for a more complex PSHA study using a stochastic simulation approach for the prediction of ground motions. The presented approach is general enough to accommodate more advanced PSHA studies of greater complexity.
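The core idea of the study above — propagating exact first-order derivatives through a hazard computation — can be sketched with a minimal forward-mode AD implementation based on dual numbers. The toy GMPE and its coefficients below are illustrative only and are not the models used in the study:

```python
import math

class Dual:
    """Minimal forward-mode AD value: a real part and a derivative part."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def log(self):
        return Dual(math.log(self.val), self.dot / self.val)

def ln_pga(m, r):
    # Toy GMPE: ln PGA = a + b*M - c*ln(R); coefficients are hypothetical.
    a, b, c = -1.0, 0.9, 1.3
    return a + b * m - c * r.log()

# Sensitivity of ln PGA to magnitude at (M=6, R=20 km): seed dM = 1, dR = 0.
out = ln_pga(Dual(6.0, 1.0), Dual(20.0, 0.0))
print(out.dot)  # → 0.9, i.e. exactly b, the analytical partial derivative
```

Seeding the derivative part of R instead (dM = 0, dR = 1) returns the exact partial derivative with respect to distance, -c/R, with no finite-difference truncation error.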
Earthquake rupture length and width estimates are in demand in many seismological applications. Earthquake magnitude estimates are often available, whereas the geometrical extents of the rupture fault are mostly lacking. Therefore, scaling relations are needed to derive length and width from magnitude. Most frequently used are the relationships of Wells and Coppersmith (1994), derived on the basis of a large dataset including all slip types with the exception of thrust-faulting events in subduction environments. However, many applications deal with earthquakes in subduction zones because of their high seismic and tsunamigenic potential, and there are no well-established scaling relations between moment magnitude and length/width for subduction events. Within this study, we compiled a large database of source parameter estimates of 283 earthquakes. All focal mechanisms are represented, with special focus on (large) subduction zone events. Scaling relations were fitted with linear least-squares as well as orthogonal regression and analyzed regarding the difference between continental and subduction zone/oceanic relationships. Additionally, the effect of technical progress in earthquake parameter estimation on scaling relations was tested, as well as the influence of different fault mechanisms. For a given moment magnitude we found shorter but wider rupture areas of thrust events compared to Wells and Coppersmith (1994). The thrust event relationships for pure continental and pure subduction zone rupture areas were found to be almost identical. The scaling relations differ significantly between slip types. The exclusion of events prior to 1964, when the worldwide standard seismic network was established, had a remarkable effect on strike-slip scaling relations: the data do not show any saturation of rupture width of strike-slip earthquakes. Generally, rupture area seems to scale with mean slip independent of magnitude. The aspect ratio L/W, however, depends on moment and differs for each slip type.
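A scaling relation of the Wells and Coppersmith (1994) form, log10 L = a + b·Mw, can be fitted by ordinary least squares as sketched below. The synthetic catalogue and its scatter are illustrative; the published coefficients for subsurface rupture length (all slip types) are used only to generate the fake data, not to reproduce the study's regressions:

```python
import numpy as np

# Synthetic (magnitude, log10 rupture-length) pairs; illustrative only.
rng = np.random.default_rng(0)
mw = rng.uniform(5.0, 8.5, 200)
a_true, b_true = -2.44, 0.59          # W&C-94 subsurface-rupture-length coefficients
log_l = a_true + b_true * mw + rng.normal(0.0, 0.16, mw.size)

# Ordinary least squares: solve for [a, b] in log10 L = a + b * Mw.
G = np.column_stack([np.ones_like(mw), mw])
coef, *_ = np.linalg.lstsq(G, log_l, rcond=None)
a_hat, b_hat = coef
print(round(a_hat, 2), round(b_hat, 2))   # ≈ -2.44  0.59
```

Orthogonal regression, also mentioned in the abstract, would instead minimise perpendicular distances and treat both magnitude and length as uncertain.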
Tsunami early warning (TEW) is a challenging task, as a decision has to be made within a few minutes on the basis of incomplete and error-prone data. Deterministic warning systems have difficulties in integrating and quantifying the intrinsic uncertainties. In contrast, probabilistic approaches provide a framework that handles uncertainties in a natural way. Recently, we proposed a method using Bayesian networks (BNs) that takes into account the uncertainties of seismic source parameter estimates in TEW. In this follow-up study, the method is applied to 10 recent large earthquakes offshore Sumatra and tested for its performance. We have evaluated both the general model performance, given the best knowledge we have today about the source parameters of the 10 events, and the corresponding response to seismic source information evaluated in real time. We find that the resulting site-specific warning level probabilities represent well the available tsunami wave measurements and observations. Difficulties occur in the real-time tsunami assessment if the moment magnitude estimate is severely over- or underestimated. In general, the probabilistic analysis reveals a considerable range of uncertainties in near-field TEW. By quantifying the uncertainties, the BN analysis provides important additional information to a decision maker in a warning centre to deal with the complexity in TEW and to reason under uncertainty.
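The kind of probabilistic reasoning such a network performs can be illustrated with a single Bayesian update over discretised magnitude states. All probabilities below are hypothetical toy numbers, far simpler than the networks used in the study:

```python
import numpy as np

# Hypothetical discretisation: three true-magnitude states, each with a
# probability of producing a dangerous tsunami at a given site.
states = ["Mw<7.5", "7.5<=Mw<8.5", "Mw>=8.5"]
prior = np.array([0.80, 0.15, 0.05])
p_tsunami_given_state = np.array([0.02, 0.40, 0.90])

# Likelihood of a noisy real-time estimate "Mw ~ 8.0" given each true state,
# reflecting the large uncertainty of early magnitude estimates.
likelihood = np.array([0.10, 0.60, 0.30])

posterior = prior * likelihood
posterior /= posterior.sum()
p_warning = float(posterior @ p_tsunami_given_state)
print(posterior.round(3), round(p_warning, 3))  # → [0.432 0.486 0.081] 0.276
```

The update shows the key point of the abstract: a severely biased magnitude likelihood would shift the posterior, and with it the warning-level probability, away from the truth.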
In low-seismicity regions, such as France or Germany, the estimation of probabilistic seismic hazard must cope with the difficult identification of active faults and with the low amount of seismic data available. Since the probabilistic hazard method was initiated, most studies have assumed a Poissonian occurrence of earthquakes. Here we propose a method that enables the inclusion of time and space dependences between earthquakes in the probabilistic estimation of hazard. Combining the epidemic-type aftershock sequence (ETAS) seismicity model with a Monte Carlo technique, aftershocks are naturally accounted for in the hazard determination. The method is applied to the Pyrenees region in Southern France. The impact on hazard of declustering and of the usual assumption that earthquakes occur according to a Poisson process is quantified, showing that aftershocks contribute on average less than 5 per cent to the probabilistic hazard, with an upper bound around 18 per cent.
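The role of aftershocks can be explored with a strongly simplified, one-generation ETAS-style Monte Carlo simulation (no secondary triggering, truncated Gutenberg-Richter magnitudes). Every parameter value below is hypothetical and not taken from the Pyrenees application:

```python
import numpy as np

rng = np.random.default_rng(42)

# Background events are Poisson in time; each spawns aftershocks with
# productivity K * 10**(alpha * (m - m_min)). Illustrative parameters only.
mu, b, m_min, m_max = 50.0, 1.0, 3.0, 7.0   # background rate /yr, G-R b, M bounds
K, alpha = 0.02, 0.8

def simulate_year():
    n_bg = rng.poisson(mu)
    mags = m_min + rng.exponential(1.0 / (b * np.log(10.0)), n_bg)
    mags = np.minimum(mags, m_max)           # truncate the G-R tail
    n_aft = rng.poisson(K * 10.0 ** (alpha * (mags - m_min))).sum()
    return n_bg, n_aft

bg, aft = np.sum([simulate_year() for _ in range(2000)], axis=0)
frac = aft / (bg + aft)
print(round(float(frac), 3))   # share of events that are triggered aftershocks
```

In a full hazard run, each synthetic catalogue would then be fed through the ground-motion model to compare exceedance rates with and without the triggered events.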
The use of ground-motion-prediction equations to estimate ground shaking has become a very popular approach for seismic-hazard assessment, especially in the framework of a logic-tree approach. Owing to the large number of existing published ground-motion models, however, the selection and ranking of appropriate models for a particular target area often pose serious practical problems. Here we show how observed ground-motion records can help to guide this process in a systematic and comprehensible way. A key element in this context is a new, likelihood-based, goodness-of-fit measure that has the property not only to quantify the model fit but also to measure to some degree how well the underlying statistical model assumptions are met. By design, this measure naturally scales between 0 and 1, with a value of 0.5 for a situation in which the model perfectly matches the sample distribution in terms of both mean and standard deviation. We have used it in combination with other goodness-of-fit measures to derive a simple classification scheme to quantify how well a candidate ground-motion-prediction equation models a particular set of observed response spectra. This scheme is demonstrated to perform well in recognizing a number of popular ground-motion models from their rock-site-recording subsets. This indicates its potential for aiding the assignment of logic-tree weights in a consistent and reproducible way. We have applied our scheme to the border region of France, Germany, and Switzerland, where the M-w 4.8 St. Die earthquake of 22 February 2003 in eastern France recently provided a small set of observed response spectra. These records are best modeled by the ground-motion-prediction equation of Berge-Thierry et al. (2003), which is based on the analysis of predominantly European data. The fact that the Swiss model of Bay et al. (2003) is not able to model the observed records in an acceptable way may indicate general problems arising from the use of weak-motion data for strong-motion prediction.
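A common reading of such a likelihood-based measure assigns to each normalised residual z the probability that a standard normal variate exceeds |z|; whether this exact form matches the study's measure is an assumption, and the sketch below only illustrates the stated property that a perfectly fitting model yields values centred on 0.5:

```python
import math
import numpy as np

def lh(z):
    """Probability that |Z| > |z| for standard normal Z; scales between 0 and 1,
    with a median of 0.5 when the residuals truly are standard normal."""
    return math.erfc(abs(z) / math.sqrt(2.0))

# For a perfectly fitting model the normalised residuals are standard normal,
# so the LH values are uniform on (0, 1) and their median is 0.5.
rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)
med = float(np.median([lh(v) for v in z]))
print(round(med, 2))  # ≈ 0.5
```

A model with biased mean or mis-stated standard deviation pushes the median away from 0.5, which is what makes the measure sensitive to the statistical assumptions and not just to the fit.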
Logic trees are widely used in probabilistic seismic hazard analysis as a tool to capture the epistemic uncertainty associated with the seismogenic sources and the ground-motion prediction models used in estimating the hazard. Combining two or more ground-motion relations within a logic tree will generally require several conversions to be made, because there are several definitions available for both the predicted ground-motion parameters and the explanatory parameters within the predictive ground-motion relations. Procedures for making conversions for each of these factors are presented, using a suite of predictive equations in current use for illustration. The sensitivity of the resulting ground-motion models to these conversions is shown to be pronounced for some of the parameters, especially the measure of source-to-site distance, highlighting the need to take into account any incompatibilities among the selected equations. Procedures are also presented for assigning weights to the branches in the ground-motion section of the logic tree in a transparent fashion, considering both the intrinsic merits of the individual equations and their degree of applicability to the particular application.
In recent years, H/V measurements have been increasingly used to map the thickness of sediment fill in sedimentary basins in the context of seismic hazard assessment. This parameter is believed to be an important proxy for the site effects in sedimentary basins (e.g. in the Los Angeles basin). Here we present the results of a test using this approach across an active normal fault in a structurally well-known situation. Measurements on a 50 km long profile with 1 km station spacing clearly show a change in the frequency of the fundamental peak of H/V ratios with increasing thickness of the sediment layer in the eastern part of the Lower Rhine Embayment. Subsequently, a section of 10 km length across the Erft-Sprung system, a normal fault with ca. 750 m vertical offset, was measured with a station distance of 100 m. Frequencies of the first and second peaks and the first trough in the H/V spectra are used in a simple resonance model to estimate depths of the bedrock. While the frequency of the first peak shows a large scatter for sediment depths larger than ca. 500 m, the frequency of the first trough follows the changing thickness of the sediments across the fault. The lateral resolution is in the range of the station distance of 100 m. A power law for the depth dependence of the S-wave velocity, derived from downhole measurements in an earlier study [Budny, 1984], and power laws inverted from dispersion analysis of microarray measurements [Scherbaum et al., 2002] agree with the results from the H/V ratios of this study.
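The simplest form of such a resonance model is the quarter-wavelength rule f0 = Vs/(4h) for a uniform sediment layer over bedrock. The sketch below uses an assumed average sediment Vs of 400 m/s (an illustrative value, not one from the study) to show how a measured peak or trough frequency translates into a depth estimate:

```python
def bedrock_depth(f0_hz, vs_m_s=400.0):
    """Depth estimate from the fundamental resonance frequency, assuming a
    uniform layer: f0 = Vs / (4 h)  =>  h = Vs / (4 f0). Vs is illustrative."""
    return vs_m_s / (4.0 * f0_hz)

print(round(bedrock_depth(1.0), 1))    # → 100.0 m for a 1 Hz fundamental peak
print(round(bedrock_depth(0.13), 1))   # → 769.2 m, the order of the ~750 m fault offset
```

In practice a depth-dependent power-law Vs(z), as inverted in the cited studies, replaces the constant velocity, which lowers the inferred frequency for deep basins.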
One of the key challenges in the context of local site effect studies is the determination of frequencies where the shakeability of the ground is enhanced. In this context, the H/V technique has become increasingly popular, and peak frequencies of the H/V spectral ratio are sometimes interpreted as resonance frequencies of the transmission response. In the present study, assuming that Rayleigh surface waves are dominant in the H/V spectral ratio, we analyse theoretically under which conditions this may be justified and when not. We focus on 'layer over half-space' models which, although seemingly simple, capture many aspects of local site effects in real sedimentary structures. Our starting point is the ellipticity of Rayleigh waves. We use the exact formula of the H/V-ratio presented by Malischewsky & Scherbaum (2004) to investigate the main characteristics of peak and trough frequencies. We present a simple formula indicating if and where H/V-ratio curves have sharp peaks, depending on the model parameters. In addition, we have constructed a map which demonstrates the relation between the H/V-peak frequency and the peak frequency of the transmission response in the domain of the layer's Poisson ratio and the impedance contrast. Finally, we have derived maps showing the relationship between the H/V-peak and trough frequencies and key parameters of the model such as impedance contrast. These maps are seen as diagnostic tools, which can help to guide the interpretation of H/V spectral ratio diagrams in the context of site effect studies.
The functional form of empirical response spectral ground-motion prediction equations (GMPEs) is often derived using concepts borrowed from Fourier spectral modeling of ground motion. As these GMPEs are subsequently calibrated with empirical observations, this may not appear to pose any major problems in the prediction of ground motion for a particular earthquake scenario. However, the assumption that Fourier spectral concepts persist for response spectra can lead to undesirable consequences when it comes to the adjustment of response spectral GMPEs to represent conditions not covered in the original empirical data set. In this context, a couple of important questions arise, for example: what are the distinctions and/or similarities between Fourier and response spectra of ground motions? And, if they are different, then what is the mechanism responsible for such differences, and how do adjustments that are made to the Fourier amplitude spectrum (FAS) manifest in response spectra? The present article explores the relationship between the Fourier and response spectrum of ground motion by using random vibration theory (RVT). With a simple Brune (1970, 1971) source model, RVT-generated acceleration spectra for a fixed magnitude and distance scenario are used. The RVT analyses reveal that the scaling of low oscillator-frequency response spectral ordinates can be treated as being equivalent to the scaling of the corresponding Fourier spectral ordinates. However, the high oscillator-frequency response spectral ordinates are controlled by a rather wide band of Fourier spectral ordinates. In fact, the peak ground acceleration, counter to the popular perception that it is a reflection of the high-frequency characteristics of ground motion, is controlled by the entire Fourier spectrum of ground motion. Additionally, this article demonstrates how an adjustment made to the FAS compares with the same adjustment made directly to response spectral ordinates. For this purpose, two cases are considered: adjustments to the stress parameter (Delta sigma; a source term) and adjustments to the attributes reflecting site response (V-S and kappa(0)).
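The RVT link between a Fourier amplitude spectrum and a peak ground-motion value can be sketched via spectral moments and an asymptotic peak factor. The Brune-type spectral shape, the kappa filter, and all parameter values below are illustrative, not the scenario used in the article:

```python
import numpy as np

# Illustrative Brune-type acceleration FAS with a kappa site filter.
f = np.linspace(0.01, 50.0, 20_000)          # frequency grid, Hz
df = f[1] - f[0]
fc, kappa = 0.5, 0.04                        # corner frequency, site kappa (hypothetical)
fas = f**2 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

T = 10.0                                     # ground-motion duration, s (hypothetical)
w = 2.0 * np.pi * f
m0 = 2.0 * np.sum(fas**2) * df               # spectral moments via Parseval
m2 = 2.0 * np.sum((w * fas) ** 2) * df

a_rms = np.sqrt(m0 / T)                      # RMS acceleration over the duration
n_z = T * np.sqrt(m2 / m0) / np.pi           # expected number of zero crossings
eta = np.sqrt(2.0 * np.log(n_z))
peak_factor = eta + 0.5772 / eta             # asymptotic peak-factor approximation
pga = peak_factor * a_rms                    # expected peak in the FAS amplitude units
print(round(float(peak_factor), 2))
```

Because m0 and m2 integrate over the whole spectrum, the resulting peak depends on all Fourier ordinates at once, which is exactly the point the abstract makes about PGA.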
One of the major challenges in engineering seismology is the reliable prediction of site-specific ground motion for particular earthquakes, observed at specific distances. For larger events, a special problem arises at short distances with the source-to-site distance measure, because distance metrics based on a point-source model are no longer appropriate. As a consequence, different attenuation relations differ in the distance metric that they use. In addition to being a source of confusion, this causes problems when quantitatively comparing or combining different ground-motion models; for example, in the context of Probabilistic Seismic Hazard Assessment, in cases where ground-motion models with different distance metrics occupy neighboring branches of a logic tree. In such a situation, very crude assumptions about source sizes and orientations often have to be used to derive an estimate of the particular metric required. Even if this solves the problem of providing a number to put into the attenuation relation, a serious problem remains. When converting distance measures, the corresponding uncertainties map onto the estimated ground motions according to the laws of error propagation. To make matters worse, conversion of distance metrics can cause the uncertainties of the adapted ground-motion model to become magnitude and distance dependent, even if they are not in the original relation. To be able to treat this problem quantitatively, the variability increase caused by the distance metric conversion has to be quantified. For this purpose, we have used well-established scaling laws to determine explicit distance conversion relations using regression analysis on simulated data. We demonstrate that, for all practical purposes, most popular distance metrics can be related to the Joyner-Boore distance using models based on gamma distributions to express the shape of some "residual function." The functional forms are magnitude and distance dependent and are expressed as polynomials. We compare the performance of these relations with manually derived individual distance estimates for the Landers, the Imperial Valley, and the Chi-Chi earthquakes.
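The geometric origin of the discrepancy between distance metrics is easy to demonstrate. The sketch below compares the epicentral (point-source) distance with the Joyner-Boore distance for a hypothetical vertical strike-slip rupture, centred at the origin and striking along x:

```python
import numpy as np

def r_jb(site_xy, half_length):
    """Shortest horizontal distance to the surface projection of a vertical
    strike-slip rupture of the given half-length along the x axis
    (hypothetical geometry, epicentre at the fault centre)."""
    x, y = site_xy
    dx = max(abs(x) - half_length, 0.0)   # distance beyond the rupture ends
    return float(np.hypot(dx, y))

L = 60.0                                  # rupture length, km (roughly an Mw 7.3 strike-slip event)
site = (10.0, 25.0)
r_epi = float(np.hypot(*site))            # point-source (epicentral) distance
rjb = r_jb(site, L / 2.0)
print(round(r_epi, 1), round(rjb, 1))     # → 26.9 25.0
```

For sites near the middle of a long rupture the two metrics diverge much more strongly, and the spread of plausible rupture positions for a given magnitude is what the gamma-distributed "residual function" in the study captures statistically.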
Records from ocean bottom seismometers (OBSs) are highly contaminated by noise, which is much stronger than in data from most land stations, especially on the horizontal components. As a consequence, the high energy of the oceanic noise at frequencies below 1 Hz considerably complicates the analysis of the teleseismic earthquake signals recorded by OBSs.
Previous studies suggested different approaches to remove low-frequency noise from OBS recordings but mainly focused on the vertical component. The records of horizontal components, which are crucial for the application of many methods in passive seismological analysis of body and surface waves, could not be much improved in the teleseismic frequency band. Here we introduce a noise reduction method, derived from the harmonic–percussive separation algorithms used in Zali et al. (2021), in order to separate long-lasting narrowband signals from broadband transients in the OBS signal. This leads to significant noise reduction of OBS records on both the vertical and horizontal components and increases the earthquake signal-to-noise ratio (SNR) without distortion of the broadband earthquake waveforms. This is demonstrated through tests with synthetic data. Both SNR and cross-correlation coefficients showed significant improvements for different realistic noise realizations. The application of denoised signals in surface wave analysis and receiver functions is discussed through tests with synthetic and real data.
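The median-filtering intuition behind harmonic-percussive separation can be shown on a toy spectrogram: long-lasting narrowband noise forms ridges that are smooth along time, while broadband transients are smooth along frequency. This is a schematic illustration of the principle only, not the algorithm of Zali et al. (2021):

```python
import numpy as np

# Synthetic toy magnitude spectrogram (frequency x time), not real OBS data.
rng = np.random.default_rng(0)
n_f, n_t = 64, 200
S = 0.1 * rng.random((n_f, n_t))
S[10, :] += 3.0                 # persistent narrowband "noise" line
S[:, 120] += 3.0                # broadband "earthquake" transient

# Sliding medians: along time picks out harmonic (noise) energy, along
# frequency picks out percussive (transient) energy.
harm = np.median(np.stack([np.roll(S, k, axis=1) for k in range(-8, 9)]), axis=0)
perc = np.median(np.stack([np.roll(S, k, axis=0) for k in range(-8, 9)]), axis=0)
mask_noise = harm > perc        # binary mask routing energy to the noise part
print(bool(mask_noise[10, 50]), bool(mask_noise[30, 120]))  # → True False
```

Applying the complementary mask to the complex spectrogram and inverting the transform would return the transient (earthquake) component with the narrowband noise suppressed.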
Adjustment of median ground motion prediction equations (GMPEs) from one region to another is one of the major challenges within the current practice of seismic hazard analysis. In our approach to generating response spectra, we derive two separate empirical models for (a) the Fourier amplitude spectrum (FAS) and (b) the duration of ground motion. To calculate response spectra, the two models are combined within the random vibration theory (RVT) framework. The models are calibrated on recordings obtained from shallow crustal earthquakes in active tectonic regions. We use a subset of the NGA-West2 database with M 3.2-7.9 earthquakes at distances of 0-300 km. The NGA-West2 database, which spans a wide magnitude and distance range, provides better constraints on the derived models. A frequency-dependent duration model is derived to obtain adjustable response spectral ordinates. The close agreement of our approach with other NGA-West2 models implies that it can also be used as a stand-alone model.
The most recent intense earthquake swarm in West Bohemia lasted from 6 October 2008 to January 2009. Starting 12 days after the onset, the University of Potsdam monitored the swarm with a temporary small-aperture seismic array at 10 km epicentral distance. The purpose of the installation was a complete monitoring of the swarm including micro-earthquakes (ML < 0). We identify earthquakes using a conventional short-term average/long-term average trigger combined with sliding-window frequency-wavenumber and polarisation analyses. The resulting earthquake catalogue consists of 14,530 earthquakes between 19 October 2008 and 18 March 2009 with magnitudes in the range −1.2 ≤ ML ≤ 2.7. The small-aperture seismic array substantially lowers the detection threshold to about Mc = −0.4, compared to the regional networks operating in West Bohemia (Mc > 0.0). In the course of this work, the main temporal features (frequency-magnitude distribution, propagation of back azimuth and horizontal slowness, occurrence rate of aftershock sequences and interevent-time distribution) of the 2008/2009 earthquake swarm are presented and discussed. Temporal changes of the coefficient of variation (based on interevent times) suggest that the earthquake activity of the 2008/2009 swarm terminated by 12 January 2009. During the main phase of the studied swarm period after 19 October, the b value of the Gutenberg-Richter relation decreases from 1.2 to 0.8. This trend is also reflected in the power-law behavior of the seismic moment release. The corresponding total seismic moment release of 1.02 × 10^17 Nm is equivalent to ML,max = 5.4.
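A short-term average/long-term average (STA/LTA) trigger of the kind mentioned above can be sketched in a few lines. The moving-window form is shown (recursive variants are also common); the window lengths, threshold, and synthetic trace below are illustrative, not the settings used for the array:

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Ratio of short-term to long-term average signal energy, computed with
    cumulative sums and aligned so both windows end at the same sample."""
    e = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta       # short-term average
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta       # long-term average
    n = len(e) - n_lta + 1
    return sta[-n:] / np.maximum(lta, 1e-20)

rng = np.random.default_rng(3)
trace = rng.normal(0.0, 1.0, 5000)
trace[3000:3100] += rng.normal(0.0, 8.0, 100)          # synthetic "event"

n_sta, n_lta = 50, 1000
ratio = sta_lta(trace, n_sta, n_lta)
trigger = int(np.argmax(ratio > 5.0)) + n_lta - 1      # first sample exceeding threshold
print(trigger)
```

The trigger fires within a few tens of samples of the synthetic event onset at sample 3000; on real array data the trigger would be combined with the frequency-wavenumber and polarisation checks described in the abstract.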
An important task of seismic hazard assessment consists of estimating the rate of seismic moment release, which is correlated to the rate of tectonic deformation and the seismic coupling. However, the estimations of deformation depend on the type of information utilized (e.g. geodetic, geological, seismic) and include large uncertainties. We therefore estimate the deformation rate in the Lower Rhine Embayment (LRE), Germany, using an integrated approach in which the uncertainties have been systematically incorporated. On the basis of a new homogeneous earthquake catalogue we initially determine the frequency-magnitude distribution by statistical methods. In particular, we focus on an adequate estimation of the upper bound of the Gutenberg-Richter relation and demonstrate the importance of additional palaeoseismological information. The integration of seismological and geological information yields a probability distribution of the upper bound magnitude. Using this distribution together with the distribution of Gutenberg-Richter a and b values, we perform Monte Carlo simulations to derive the seismic moment release as a function of the observation time. The seismic moment release estimated from synthetic earthquake catalogues with short catalogue length is found to systematically underestimate the long-term moment rate, which can be analytically determined. The moment release recorded in the LRE over the last 250 yr is found to be in good agreement with the probability distribution resulting from the Monte Carlo simulations. Furthermore, the long-term distribution is, within its uncertainties, consistent with the moment rate derived from geological measurements, indicating an almost complete seismic coupling in this region. By means of Kostrov's formula, we additionally calculate the full deformation rate tensor using the distribution of known focal mechanisms in the LRE. Finally, we use the same approach to calculate the seismic moment and deformation rates for two subsets of the catalogue corresponding to the east- and west-dipping faults, respectively.
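The b value entering such Monte Carlo simulations is commonly estimated with the maximum-likelihood formula of Aki (1965), b = log10(e) / (mean(M) − Mc), where Mc is the magnitude of completeness. The synthetic catalogue below is illustrative, not the LRE catalogue:

```python
import numpy as np

# Synthetic Gutenberg-Richter catalogue with b_true = 1.0 above completeness Mc.
rng = np.random.default_rng(7)
mc, b_true = 0.0, 1.0
mags = mc + rng.exponential(1.0 / (b_true * np.log(10.0)), 20_000)

# Aki (1965) maximum-likelihood estimator of the b value.
b_hat = np.log10(np.e) / (mags.mean() - mc)
print(round(float(b_hat), 2))  # ≈ 1.0
```

With the a and b values (and an upper-bound magnitude) in hand, the long-term moment rate follows by integrating the moment-frequency distribution, which is the analytical reference the abstract compares the simulations against.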
Bayesian networks are a powerful and increasingly popular tool for reasoning under uncertainty, offering intuitive insight into (probabilistic) data-generating processes. They have been successfully applied to many different fields, including bioinformatics. In this paper, Bayesian networks are used to model the joint-probability distribution of selected earthquake, site, and ground-motion parameters. This provides a probabilistic representation of the independencies and dependencies between these variables. In particular, contrary to classical regression, Bayesian networks do not distinguish between target and predictors, treating each variable as a random variable. The capability of Bayesian networks to model the ground-motion domain in probabilistic seismic hazard analysis is shown for a generic situation. A Bayesian network is learned based on a subset of the Next Generation Attenuation (NGA) dataset, using 3342 records from 154 earthquakes. Because no prior assumptions about dependencies between particular parameters are made, the learned network displays the most probable model given the data. The learned network shows that the ground-motion parameter (horizontal peak ground acceleration, PGA) is directly connected only to the moment magnitude, Joyner-Boore distance, fault mechanism, source-to-site azimuth, and depth to a shear-wave velocity horizon of 2.5 km/s (Z2.5). In particular, the effect of V-S30 is mediated by Z2.5. Comparisons of the PGA distributions based on the Bayesian networks with the NGA model of Boore and Atkinson (2008) show a reasonable agreement in ranges of good data coverage.
In estimating dispersion with the help of wavelet analysis, considerable emphasis has been placed on the extraction of the group velocity using the modulus of the wavelet transform. In this paper we give an asymptotic expression of the full propagator in wavelet space that comprises the phase velocity as well. This operator establishes a relationship between the signals observed at two different stations during wave propagation in a dispersive and attenuating medium. Numerical and experimental examples are presented to show that the method accurately models seismic wave dispersion and attenuation.