Earthquake rupture length and width estimates are needed in many seismological applications. Magnitude estimates are often available, whereas the geometrical extent of the rupture fault is mostly lacking; scaling relations are therefore needed to derive length and width from magnitude. Most frequently used are the relationships of Wells and Coppersmith (1994), derived from a large dataset including all slip types except thrust-faulting events in subduction environments. However, many applications deal with earthquakes in subduction zones because of their high seismic and tsunamigenic potential, and there are no well-established scaling relations between moment magnitude and length/width for subduction events. In this study, we compiled a large database of source-parameter estimates for 283 earthquakes. All focal mechanisms are represented, but special focus is set on (large) subduction zone events. Scaling relations were fitted with linear least-squares as well as orthogonal regression and analyzed with regard to the difference between continental and subduction-zone/oceanic relationships. Additionally, the effect of technical progress in earthquake parameter estimation on the scaling relations was tested, as was the influence of different fault mechanisms. For a given moment magnitude we found shorter but wider rupture areas of thrust events compared to Wells and Coppersmith (1994). The thrust-event relationships for pure continental and pure subduction-zone rupture areas were found to be almost identical. The scaling relations differ significantly between slip types. Excluding events prior to 1964, when the Worldwide Standardized Seismograph Network was established, had a remarkable effect on the strike-slip scaling relations: the data no longer show any saturation of rupture width for strike-slip earthquakes. Generally, rupture area seems to scale with mean slip independent of magnitude. The aspect ratio L/W, however, depends on moment and differs for each slip type.
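The two regression schemes named in the abstract can be sketched on synthetic data. The coefficients below follow the form of the all-slip-type length relation of Wells and Coppersmith (1994), but both they and the scatter are treated here as illustrative assumptions, not the values of this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: moment magnitudes and log10 rupture lengths (km),
# generated from an assumed scaling law log10(L) = -2.44 + 0.59*Mw.
mw = rng.uniform(5.0, 8.5, 200)
log_l = -2.44 + 0.59 * mw + rng.normal(0.0, 0.16, mw.size)

# Ordinary least squares: minimizes vertical residuals in log10(L).
b_ols, a_ols = np.polyfit(mw, log_l, 1)

# Orthogonal (total least squares) regression via SVD: minimizes
# perpendicular distances, treating both variables as uncertain.
x = mw - mw.mean()
y = log_l - log_l.mean()
_, _, vt = np.linalg.svd(np.column_stack([x, y]))
b_tls = -vt[1, 0] / vt[1, 1]          # normal vector -> line slope
a_tls = log_l.mean() - b_tls * mw.mean()

print(f"OLS: log10(L) = {a_ols:.2f} + {b_ols:.2f}*Mw")
print(f"TLS: log10(L) = {a_tls:.2f} + {b_tls:.2f}*Mw")
```

The two fits differ little here because the scatter is small; with larger errors in both variables the orthogonal fit is the more defensible choice.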
Although the methodological framework of probabilistic seismic hazard analysis is well established, the selection of models to predict the ground motion at the sites of interest remains a major challenge. Information theory provides a powerful theoretical framework that can guide this selection process in a consistent way. From an information-theoretic perspective, the appropriateness of models can be expressed in terms of their relative information loss (Kullback-Leibler distance) and hence in physically meaningful units (bits). In contrast to hypothesis testing, information-theoretic model selection does not require ad hoc decisions regarding significance levels, nor does it require the models to be mutually exclusive and collectively exhaustive. The key ingredient, the Kullback-Leibler distance, can be estimated from the statistical expectation of log-likelihoods of observations for the models under consideration. In the present study, data-driven ground-motion model selection based on Kullback-Leibler-distance differences is illustrated for a set of simulated observations of response spectra and macroseismic intensities. Information theory allows for a unified treatment of both quantities. The application of Kullback-Leibler-distance-based model selection to real data, using the data set that generated the Abrahamson and Silva (1997) ground-motion model, demonstrates the superior performance of the information-theoretic perspective in comparison to earlier attempts at data-driven model selection (e.g., Scherbaum et al., 2004).
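The key estimation step, ranking models by the sample mean of their log-likelihoods, can be illustrated with a toy setup. The three Gaussian "models" and the scenario below are invented for the sketch; differences in mean log-likelihood estimate differences in Kullback-Leibler distance (in nats; divide by ln 2 for bits):

```python
import math, random

random.seed(1)

def loglik(x, mu, sigma):
    # Log-density of a normal "ground-motion model" prediction.
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Hypothetical candidate models for log ground motion at a fixed scenario:
# (median, sigma) pairs; model A is the data-generating one.
models = {"A": (0.0, 0.3), "B": (0.2, 0.3), "C": (0.0, 0.6)}

# Simulated observations drawn from model A.
obs = [random.gauss(0.0, 0.3) for _ in range(5000)]

# Mean log-likelihood per model; higher mean log-likelihood means
# smaller information loss relative to the data-generating process.
scores = {name: sum(loglik(x, mu, s) for x in obs) / len(obs)
          for name, (mu, s) in models.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # the data-generating model should rank first
```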
Empirical ground-motion models used in seismic hazard analysis are commonly derived by regression of observed ground motions against a chosen set of predictor variables. Commonly, the model-building process is based on residual analysis and/or expert knowledge and opinion, while the quality of the model is assessed by the goodness-of-fit to the data. Such an approach, however, bears no immediate relation to the predictive power of the model and, with increasing complexity of the models, is increasingly susceptible to the danger of overfitting. Here, a different, primarily data-driven method for the development of ground-motion models is proposed that makes use of the notion of generalization error to counteract the problem of overfitting. Generalization error directly estimates the average prediction error on data not used for the model generation and thus is a good criterion to assess the predictive capabilities of a model. The approach taken here makes only a few a priori assumptions. At first, peak ground acceleration and response spectrum values are modeled by flexible, nonphysical functions (polynomials) of the predictor variables. The inclusion of a particular predictor and the order of the polynomials are based on minimizing generalization error. The approach is illustrated for the Next Generation of Ground-Motion Attenuation (NGA) dataset. The resulting model is rather complex, comprising 48 parameters, but has considerably lower generalization error than functional forms commonly used in ground-motion models. The model parameters have no physical meaning, but a visual interpretation is possible and can reveal relevant characteristics of the data, for example, the Moho bounce in the distance scaling. In a second step, the regression model is approximated by an equivalent stochastic model, making it physically interpretable. The resulting resolvable stochastic model parameters are comparable to published models for western North America. In general, for large datasets generalization error minimization provides a viable method for the development of empirical ground-motion models.
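A minimal sketch of generalization-error-driven model selection, using k-fold cross-validation to pick a polynomial order on a hypothetical data set (the cubic functional form and noise level are assumptions for illustration, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: a cubic trend in one predictor plus noise.
x = rng.uniform(-1.0, 2.0, 300)
y = 0.5 * x**3 - 2.0 * x**2 + x + rng.normal(0.0, 0.1, x.size)

def cv_error(order, k=5):
    """k-fold cross-validation estimate of the generalization error:
    average squared prediction error on held-out data."""
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    err = 0.0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], order)
        err += np.sum((np.polyval(coef, x[fold]) - y[fold])**2)
    return err / x.size

errors = {order: cv_error(order) for order in range(1, 9)}
best = min(errors, key=errors.get)
print("selected polynomial order:", best)
```

Orders below the true complexity are penalized by bias, orders far above it by variance, so the cross-validated error is minimized near the true order rather than always rewarding the most flexible fit.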
This study presents an unsupervised feature selection and learning approach for the discovery and intuitive imaging of significant temporal patterns in seismic single-station or network recordings. For this purpose, the data are parametrized by real-valued feature vectors for short time windows using standard analysis tools for seismic data, such as frequency-wavenumber, polarization, and spectral analysis. We use Self-Organizing Maps (SOMs) for a data-driven feature selection, visualization, and clustering procedure, which is particularly suitable for high-dimensional data sets. Our feature selection method is based on significance testing using the Wald-Wolfowitz runs test for individual features and on correlation hunting with SOMs in feature subsets. Using synthetics composed of Rayleigh and Love waves and real-world data, we show the robustness and the improved discriminative power of this approach compared to feature subsets manually selected from individual wavefield parametrization methods. Furthermore, the capability of the clustering and visualization techniques to investigate the discrimination of wave phases is shown by means of synthetic waveforms and regional earthquake recordings.
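The Wald-Wolfowitz runs test used here for feature significance can be written compactly. The two label sequences below are contrived to show the extremes: a feature that cleanly separates two wave classes when the windows are sorted by its value, and a useless feature that mixes them:

```python
import math

def runs_test_z(labels):
    """Wald-Wolfowitz runs test statistic for a binary sequence.

    Counts runs of identical symbols; under the null hypothesis of a
    random arrangement the z-score is approximately standard normal.
    """
    n1 = sum(1 for v in labels if v)
    n2 = len(labels) - n1
    runs = 1 + sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    n = n1 + n2
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
    return (runs - mean) / math.sqrt(var)

# Class labels of time windows sorted by feature value: a discriminative
# feature yields few runs (strongly negative z), a useless one many runs.
discriminative = [0] * 20 + [1] * 20
mixed = [0, 1] * 20
print(runs_test_z(discriminative), runs_test_z(mixed))
```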
The aim of this paper is to characterize the spatio-temporal distribution of Central European seismicity. Specifically, by using a non-parametric statistical approach, the proportional hazard model, leading to an empirical estimation of the hazard function, we provide some constraints on the time behavior of earthquake generation mechanisms. The results indicate that the most conspicuous characteristic of M-w 4.0+ earthquakes is a temporal clustering lasting a couple of years. This suggests that the probability of occurrence increases immediately after a previous event. After a few years, the process becomes almost time independent. Furthermore, we investigate the cluster properties of the seismicity of Central Europe by comparing the obtained result with that of synthetic catalogs generated by the epidemic-type aftershock sequence (ETAS) model, which has previously been successfully applied for short-term clustering. Our results indicate that the ETAS model is not well suited to describe the seismicity as a whole, while it is able to capture the features of the short-term behaviour. Remarkably, similar results have previously been found for Italy using a higher magnitude threshold.
Considering the increasing number and complexity of ground-motion prediction equations available for seismic hazard assessment, there is a definite need for an efficient, quantitative, and robust method to select and rank these models for a particular region of interest. In a recent article, Scherbaum et al. (2009) have suggested an information-theoretic approach for this purpose that overcomes several shortcomings of earlier attempts at using data-driven ground-motion prediction equation selection procedures. The results of their theoretical study provide evidence that, in addition to observed response spectra, macroseismic intensity data might be useful for model selection and ranking. We present here an applicability study for this approach using response spectra and macroseismic intensities from eight Californian earthquakes. A total of 17 ground-motion prediction equations, from different regions, for response spectra, combined with the equation of Atkinson and Kaka (2007) for macroseismic intensities, are tested for their relative performance. The resulting data-driven rankings show that the models that best estimate ground motion in California are, as one would expect, Californian and western U.S. models, while some European models also perform fairly well. Moreover, the model performance appears to be strongly dependent on both distance and frequency. The relative information of intensity versus response spectral data is also explored. The strong correlation we obtain between intensity-based rankings and spectral-based ones demonstrates the great potential of macroseismic intensity data for model selection in the context of seismic hazard assessment.
Digital seismology tutor
(2001)
The use of ground-motion-prediction equations to estimate ground shaking has become a very popular approach for seismic-hazard assessment, especially in the framework of a logic-tree approach. Owing to the large number of existing published ground-motion models, however, the selection and ranking of appropriate models for a particular target area often pose serious practical problems. Here we show how observed ground-motion records can help to guide this process in a systematic and comprehensible way. A key element in this context is a new, likelihood-based, goodness-of-fit measure that not only quantifies the model fit but also measures to some degree how well the underlying statistical model assumptions are met. By design, this measure naturally scales between 0 and 1, with a value of 0.5 for a situation in which the model perfectly matches the sample distribution both in terms of mean and standard deviation. We have used it in combination with other goodness-of-fit measures to derive a simple classification scheme to quantify how well a candidate ground-motion-prediction equation models a particular set of observed response spectra. This scheme is demonstrated to perform well in recognizing a number of popular ground-motion models from their rock-site recording subsets. This indicates its potential for aiding the assignment of logic-tree weights in a consistent and reproducible way. We have applied our scheme to the border region of France, Germany, and Switzerland, where the M-w 4.8 St. Dié earthquake of 22 February 2003 in eastern France recently provided a small set of observed response spectra. These records are best modeled by the ground-motion-prediction equation of Berge-Thierry et al. (2003), which is based on the analysis of predominantly European data. The fact that the Swiss model of Bay et al. (2003) is not able to model the observed records in an acceptable way may indicate general problems arising from the use of weak-motion data for strong-motion prediction.
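A sketch in the spirit of the likelihood-based goodness-of-fit measure described above (the exact definition in Scherbaum et al., 2004, may differ; here each normalized residual is scored by the two-sided tail probability of a standard normal, which has median 0.5 when the model matches the observations):

```python
import math, random

random.seed(3)

def lh(z):
    """Two-sided tail probability of a standard normal at |z|; uniformly
    distributed on [0, 1] when the normalized residuals are N(0, 1)."""
    return math.erfc(abs(z) / math.sqrt(2.0))

# Normalized residuals z = (ln obs - ln predicted) / sigma for a
# hypothetical model that matches the data perfectly ...
z_good = [random.gauss(0.0, 1.0) for _ in range(2000)]
# ... and for one whose sigma underestimates the true scatter by 2x.
z_bad = [random.gauss(0.0, 2.0) for _ in range(2000)]

def med(values):
    s = sorted(values)
    return s[len(s) // 2]

m_good = med([lh(z) for z in z_good])
m_bad = med([lh(z) for z in z_bad])
print(round(m_good, 2), round(m_bad, 2))
```

The median score near 0.5 for the matching model, against a clearly lower value for the mismatched one, is what makes such a measure usable for classification and for logic-tree weighting.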
In recent years, H/V measurements have been increasingly used to map the thickness of sediment fill in sedimentary basins in the context of seismic hazard assessment. This parameter is believed to be an important proxy for the site effects in sedimentary basins (e.g. in the Los Angeles basin). Here we present the results of a test of this approach across an active normal fault in a structurally well-known situation. Measurements on a 50 km long profile with 1 km station spacing clearly show a change in the frequency of the fundamental peak of H/V ratios with increasing thickness of the sediment layer in the eastern part of the Lower Rhine Embayment. Subsequently, a section of 10 km length across the Erft-Sprung system, a normal fault with ca. 750 m vertical offset, was measured with a station distance of 100 m. Frequencies of the first and second peaks and the first trough in the H/V spectra are used in a simple resonance model to estimate depths to the bedrock. While the frequency of the first peak shows a large scatter for sediment depths larger than ca. 500 m, the frequency of the first trough follows the changing thickness of the sediments across the fault. The lateral resolution is in the range of the station distance of 100 m. A power law for the depth dependence of the S-wave velocity derived from downhole measurements in an earlier study [Budny, 1984] and power laws inverted from dispersion analysis of micro-array measurements [Scherbaum et al., 2002] agree with the results from the H/V ratios of this study.
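The resonance idea behind such depth estimates can be stated in two lines: the fundamental peak of a soft layer over bedrock falls near the quarter-wavelength frequency. The velocity and frequency below are illustrative only, and the study itself uses power-law velocity profiles rather than a constant average Vs:

```python
# Quarter-wavelength resonance: f0 = Vs_avg / (4 * h), inverted for h.
def thickness_from_peak(f0_hz, vs_avg_m_s):
    """Sediment thickness (m) from the fundamental H/V peak frequency,
    assuming a constant average shear-wave velocity (a simplification of
    the depth-dependent profiles used in the study)."""
    return vs_avg_m_s / (4.0 * f0_hz)

# Illustrative numbers: Vs = 400 m/s and a 0.8 Hz peak give h = 125 m.
print(thickness_from_peak(0.8, 400.0))
```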
One of the major challenges in engineering seismology is the reliable prediction of site-specific ground motion for particular earthquakes, observed at specific distances. For larger events at short distances, a special problem arises with the source-to-site distance measure, because distance metrics based on a point-source model are no longer appropriate. As a consequence, different attenuation relations differ in the distance metric that they use. In addition to being a source of confusion, this makes it difficult to quantitatively compare or combine different ground-motion models; for example, in the context of Probabilistic Seismic Hazard Assessment, in cases where ground-motion models with different distance metrics occupy neighboring branches of a logic tree. In such a situation, very crude assumptions about source sizes and orientations often have to be used to derive an estimate of the particular metric required. Even if this solves the problem of providing a number to put into the attenuation relation, a serious problem remains. When converting distance measures, the corresponding uncertainties map onto the estimated ground motions according to the laws of error propagation. To make matters worse, conversion of distance metrics can cause the uncertainties of the adapted ground-motion model to become magnitude and distance dependent, even if they are not in the original relation. To be able to treat this problem quantitatively, the variability increase caused by the distance-metric conversion has to be quantified. For this purpose, we have used well-established scaling laws to determine explicit distance-conversion relations using regression analysis on simulated data. We demonstrate that, for all practical purposes, most popular distance metrics can be related to the Joyner-Boore distance using models based on gamma distributions to express the shape of some "residual function." The functional forms are magnitude and distance dependent and are expressed as polynomials. We compare the performance of these relations with manually derived individual distance estimates for the Landers, the Imperial Valley, and the Chi-Chi earthquakes.
The ellipticity of Rayleigh surface waves, which is an important parameter characterizing the propagation medium, is studied for several models with increasing complexity. While the main focus lies on theory, practical implications of the use of the horizontal-to-vertical component ratio (H/V-ratio) to study the subsurface structure are considered as well. Love's approximation of the ellipticity for an incompressible layer over an incompressible half-space is critically discussed, especially concerning its applicability for different impedance contrasts. The main result is an analytically exact formula of H/V for a 2-layer model of compressible media, which is a generalization of Love's formula. It turns out that for a limited range of models Love's approximation can be used also in the general case.
The combined passive and active seismic TRANSALP experiment produced an unprecedented high-resolution crustal image of the Eastern Alps between Munich and Venice. The European and Adriatic Mohos (EM and AM, respectively) are clearly imaged with different seismic techniques: near-vertical incidence reflections and receiver functions (RFs). The European Moho dips gently southward from 35 km beneath the northern foreland to a maximum depth of 55 km beneath the central part of the Eastern Alps, whereas the Adriatic Moho is imaged primarily by receiver functions at a relatively constant depth of about 40 km. In both data sets, we have also detected first-order Alpine shear zones, such as the Helvetic detachment, Inntal fault and SubTauern ramp in the north. Apart from the Valsugana thrust, receiver functions in the southern part of the Eastern Alps also reveal a north-dipping interface, which may penetrate the entire Adriatic crust [Adriatic Crust Interface (ACI)]. Deep crustal seismicity may be related to the ACI. We interpret the ACI as the currently active retroshear zone in the doubly vergent Alpine collisional belt.
To address one of the central questions of plate tectonics (how do large transform systems work, and what are their typical features?), seismic investigations across the Dead Sea Transform (DST), the boundary between the African and Arabian plates in the Middle East, were conducted for the first time. A major component of these investigations was a combined reflection/refraction survey across the territories of Palestine, Israel and Jordan. The main results of this study are: (1) the seismic basement is offset by 3-5 km under the DST; (2) the DST cuts through the entire crust, broadening in the lower crust; (3) strong lower-crustal reflectors are imaged only on one side of the DST; (4) the seismic velocity sections show a steady increase in the depth of the crust-mantle transition (Moho) from 26 km at the Mediterranean to 39 km under the Jordan highlands, with only a small but visible, asymmetric topography of the Moho under the DST. These observations can be linked to the left-lateral movement of 105 km of the two plates in the last 17 Myr, accompanied by strong deformation within a narrow zone cutting through the entire crust. Comparing the DST and the San Andreas Fault (SAF) system, a strong asymmetry in subhorizontal lower-crustal reflectors and a deep-reaching deformation zone both occur around the DST and the SAF. The fact that such lower-crustal reflectors and deep deformation zones are observed in such different transform systems suggests that these structures are possibly fundamental features of large transform plate boundaries.
This study presents results of ambient noise measurements from temporary single-station and small-scale array deployments in the northeast of Basle. H/V spectral ratios were determined along various profiles crossing the eastern master fault of the Rhine Rift Valley and the adjacent sedimentary rift fills. The fundamental H/V peak frequencies decrease along the profile towards the east, consistent with the dip of the Tertiary sediments within the rift. Using existing empirical relationships between H/V frequency peaks and the depth of the dominant seismic contrast, derived on the basis of the λ/4-resonance hypothesis and a power-law depth dependence of the S-wave velocity, we obtain thicknesses of the rift fill from about 155 m in the west to 280 m in the east. This is in agreement with previous studies. The array analysis of the ambient noise wavefield yielded a stable dispersion relation consistent with Rayleigh wave propagation velocities. We conclude that a significant amount of surface waves is contained in the observed wavefield. The computed ellipticity of fundamental-mode Rayleigh waves for the velocity-depth models used for the estimation of the sediment thicknesses is in agreement with the observed H/V spectra over a large frequency band.
An important task of seismic hazard assessment consists of estimating the rate of seismic moment release, which is correlated with the rate of tectonic deformation and the seismic coupling. However, the estimations of deformation depend on the type of information utilized (e.g. geodetic, geological, seismic) and include large uncertainties. We therefore estimate the deformation rate in the Lower Rhine Embayment (LRE), Germany, using an integrated approach in which the uncertainties have been systematically incorporated. On the basis of a new homogeneous earthquake catalogue we initially determine the frequency-magnitude distribution by statistical methods. In particular, we focus on an adequate estimation of the upper bound of the Gutenberg-Richter relation and demonstrate the importance of additional palaeoseismological information. The integration of seismological and geological information yields a probability distribution of the upper-bound magnitude. Using this distribution together with the distribution of Gutenberg-Richter a and b values, we perform Monte Carlo simulations to derive the seismic moment release as a function of the observation time. The seismic moment release estimated from synthetic earthquake catalogues with short catalogue length is found to systematically underestimate the long-term moment rate, which can be analytically determined. The moment release recorded in the LRE over the last 250 yr is found to be in good agreement with the probability distribution resulting from the Monte Carlo simulations. Furthermore, the long-term distribution is, within its uncertainties, consistent with the moment rate derived by geological measurements, indicating an almost complete seismic coupling in this region. By means of Kostrov's formula, we additionally calculate the full deformation-rate tensor using the distribution of known focal mechanisms in the LRE. Finally, we use the same approach to calculate the seismic moment and the deformation rate for two subsets of the catalogue corresponding to the east- and west-dipping faults, respectively.
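The short-catalogue bias described above can be imitated with a toy Monte Carlo experiment: sample magnitudes from a doubly truncated Gutenberg-Richter distribution, convert them to seismic moment, and compare short and long observation windows. All parameter values below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical Gutenberg-Richter parameters: 10 events/yr above Mw 4,
# b = 1, upper-bound magnitude 8; moment via log10(M0) = 1.5*Mw + 9.1 (N m).
a_rate, b_val, m_min, m_max = 10.0, 1.0, 4.0, 8.0

def sample_magnitudes(n):
    # Inverse-transform sampling from the truncated G-R distribution.
    beta = b_val * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - rng.random(n) * c) / beta

def moment_rate(years):
    n = rng.poisson(a_rate * years)
    m0 = 10.0 ** (1.5 * sample_magnitudes(n) + 9.1)
    return m0.sum() / years

# Short catalogues usually miss the rare largest events that dominate the
# moment budget, so their median rate falls below the long-term rate.
short = float(np.median([moment_rate(50) for _ in range(500)]))
long_term = moment_rate(50000)
print(short < long_term)
```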
Composite ground-motion models and logic trees: Methodology, sensitivities, and uncertainties
(2005)
Logic trees have become a popular tool in seismic hazard studies. Commonly, the models corresponding to the end branches of the complete logic tree in a probabilistic seismic hazard analysis (PSHA) are treated separately until the final calculation of the set of hazard curves. This comes at the price that information regarding sensitivities and uncertainties in the ground-motion sections of the logic tree is only obtainable after disaggregation. Furthermore, from this end-branch model perspective even the designers of the logic tree cannot directly tell what ground-motion scenarios would most likely result from their logic trees for a given earthquake at a particular distance, nor how uncertain these scenarios might be or how they would be affected by the choices of the hazard analyst. On the other hand, all this information is already implicitly present in the logic tree. Therefore, with the ground-motion perspective that we propose in the present article, we treat the ground-motion sections of a complete logic tree for seismic hazard as a single composite model representing the complete state-of-knowledge-and-belief of a particular analyst on ground motion in a particular target region. We implement this view by resampling the ground-motion models represented in the ground-motion sections of the logic tree by Monte Carlo simulation (separately for the median values and the sigma values) and then recombining the sets of simulated values in proportion to their logic-tree branch weights. The quantiles of this resampled composite model provide the hazard analyst and the decision maker with a simple, clear, and quantitative representation of the overall physical meaning of the ground-motion section of a logic tree and the accompanying epistemic uncertainty. Quantiles of the composite model also provide an easy way to analyze the sensitivities and uncertainties related to a given logic-tree model. We illustrate this for a composite ground-motion model for central Europe. Further potential fields of application are seen wherever individual best estimates of ground motion have to be derived from a set of candidate models, for example, for hazard maps, sensitivity studies, or for modeling scenario earthquakes.
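The composite-model resampling can be sketched as follows. The branch weights, medians, and sigmas are invented, and for brevity a full value is drawn per branch rather than resampling medians and sigmas separately as the article proposes:

```python
import random

random.seed(5)

# Hypothetical ground-motion section of a logic tree: three models, each
# predicting lognormal ground motion (median ln PGA, sigma) with a weight.
branches = [
    {"w": 0.5, "mu": -1.0, "sigma": 0.5},
    {"w": 0.3, "mu": -0.8, "sigma": 0.6},
    {"w": 0.2, "mu": -1.3, "sigma": 0.4},
]

# Monte Carlo resampling: pick a branch in proportion to its weight,
# then draw from that branch's distribution.
samples = []
for _ in range(20000):
    b = random.choices(branches, weights=[br["w"] for br in branches])[0]
    samples.append(random.gauss(b["mu"], b["sigma"]))

# Quantiles of the composite model summarize the overall prediction
# and its epistemic spread in one distribution.
samples.sort()
quantiles = {q: samples[int(q * len(samples))] for q in (0.05, 0.5, 0.95)}
print(quantiles)
```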
The PEGASOS project was a major international seismic hazard study, one of the largest ever conducted anywhere in the world, to assess seismic hazard at four nuclear power plant sites in Switzerland. Before the report of this project became publicly available, a paper attacking both its methodology and its results appeared. Since the general scientific readership may have difficulty in assessing this attack in the absence of the report being attacked, we supply a response in the present paper. The bulk of the attack, besides some misconceived arguments about the role of uncertainties in seismic hazard analysis, is carried by some exercises that purport to be validation exercises. In practice, they are no such thing; they are merely independent sets of hazard calculations based on varying assumptions and procedures, often rather questionable, which come up with various different answers that have no particular significance.
Logic trees are widely used in probabilistic seismic hazard analysis as a tool to capture the epistemic uncertainty associated with the seismogenic sources and the ground-motion prediction models used in estimating the hazard. Combining two or more ground-motion relations within a logic tree will generally require several conversions to be made, because there are several definitions available for both the predicted ground-motion parameters and the explanatory parameters within the predictive ground-motion relations. Procedures for making conversions for each of these factors are presented, using a suite of predictive equations in current use for illustration. The sensitivity of the resulting ground-motion models to these conversions is shown to be pronounced for some of the parameters, especially the measure of source-to-site distance, highlighting the need to take into account any incompatibilities among the selected equations. Procedures are also presented for assigning weights to the branches in the ground-motion section of the logic tree in a transparent fashion, considering both intrinsic merits of the individual equations and their degree of applicability to the particular application
Characterization of polarization attributes of seismic waves using continuous wavelet transforms
(2006)
Complex-trace analysis is the method of choice for analyzing polarized data. Because particle motion can be represented by instantaneous attributes that show distinct features for waves of different polarization characteristics, it can be used to separate and characterize these waves. Traditional methods of complex-trace analysis only give the instantaneous attributes as a function of time or frequency. However, for transient wave types or seismic events that overlap in time, an estimate of the polarization parameters requires analysis of the time-frequency dependence of these attributes. We propose a method to map instantaneous polarization attributes of seismic signals in the wavelet domain and explicitly relate these attributes to the wavelet-transform coefficients of the analyzed signal. We compare our method with traditional complex-trace analysis using numerical examples. An advantage of our method is the possibility of performing the complete wave-mode separation/filtering process in the wavelet domain, together with its ability to provide the frequency dependence of ellipticity, which contains important information on the subsurface structure. Furthermore, using 2-C synthetic and real seismic shot gathers, we show how to use the method to separate different wave types and identify zones of interfering wave modes.
The deterministic calculation of earthquake scenarios using complete waveform modelling plays an increasingly important role in estimating shaking hazard in seismically active regions. Here we apply 3-D numerical modelling of seismic wave propagation to M 6+ earthquake scenarios in the area of the Lower Rhine Embayment, one of the most seismically active regions in central Europe. Using a 3-D basin model derived from geology, borehole information and seismic experiments, we aim at demonstrating the strong dependence of ground shaking on hypocentre location and basin structure. The simulations are carried out up to frequencies of ca. 1 Hz. As expected, the basin structure leads to strong lateral variations in peak ground motion, amplification and shaking duration. Depending on source-basin-receiver geometry, the effects correlate with basin depth and the slope of the basin flanks; yet the basin also affects peak ground motion, and hence the estimated shaking hazard, outside the basin. Comparison with measured seismograms for one of the earthquakes shows that some of the main characteristics of the wave motion are reproduced. Cumulating the derived seismic intensities from the three modelled earthquake scenarios leads to a predominantly basin-correlated intensity distribution for our study area.
The statistics of time delays between successive earthquakes has recently been claimed to be universal and to show the existence of clustering beyond the duration of aftershock bursts. We demonstrate that these claims are unjustified. Stochastic simulations with Poissonian background activity and triggered Omori-type aftershock sequences are shown to reproduce the interevent-time distributions observed on different spatial and magnitude scales in California. Thus the empirical distribution can be explained without any additional long-term clustering. Furthermore, we find that the shape of the interevent-time distribution, which can be approximated by the gamma distribution, is determined by the percentage of mainshocks in the catalog. This percentage can be calculated from the mean and variance of the interevent times and varies between 5% and 90% for different regions in California. Our investigation of stochastic simulations indicates that the interevent-time distribution provides a nonparametric reconstruction of the mainshock magnitude-frequency distribution that is superior to standard declustering algorithms.
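The link between interevent-time statistics and clustering can be illustrated with a moment fit of the gamma shape parameter; the burst generator below is a crude stand-in for the Omori-type aftershock sequences of the study, not an ETAS simulation:

```python
import random

random.seed(6)

def gamma_shape(times):
    """Moment estimate of the gamma shape k = mean^2/var of interevent
    times; k near 1 indicates Poissonian (unclustered) occurrence."""
    dt = [b - a for a, b in zip(times, times[1:])]
    mean = sum(dt) / len(dt)
    var = sum((d - mean) ** 2 for d in dt) / len(dt)
    return mean ** 2 / var

# Poissonian background activity: exponential interevent times.
t, background = 0.0, []
for _ in range(20000):
    t += random.expovariate(1.0)
    background.append(t)

# Background plus a short burst of triggered events after each shock;
# the excess of short delays drives the shape parameter below 1.
clustered = []
for t0 in background:
    clustered.append(t0)
    for _ in range(random.randint(0, 3)):
        clustered.append(t0 + random.expovariate(50.0))
clustered.sort()

k_p, k_c = gamma_shape(background), gamma_shape(clustered)
print(round(k_p, 2), round(k_c, 2))
```

In this toy setting the shape parameter drops as the fraction of triggered events rises, which is the mechanism behind reading the mainshock percentage off the mean and variance of the interevent times.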
In this paper, two sets of earthquake ground-motion relations to estimate peak ground and response spectral acceleration are developed for sites in southern Spain and in southern Norway using a recently published composite approach. For this purpose seven empirical ground-motion relations developed from recorded strong-motion data from different parts of the world were employed. The different relations were first adjusted by a number of transformations to convert the differing choices of independent parameters to a single one. After these transformations, which account for the scatter they introduce, were performed, the equations were modified to account for differences between the host and the target regions, using the stochastic method to compute the host-to-target conversion factors. Finally, functions were fitted to the derived ground-motion estimates to obtain sets of seven individual equations for use in probabilistic seismic hazard assessment for southern Spain and southern Norway. The relations are compared with local ones published for the two regions. The composite methodology calls for the setting up of independent logic trees for the median values and for the sigma values, in order to properly separate epistemic and aleatory uncertainties after the corrections and the conversions.
In low-seismicity regions, such as France or Germany, the estimation of probabilistic seismic hazard must cope with the difficult identification of active faults and with the small amount of seismic data available. Since the probabilistic hazard method was introduced, most studies have assumed a Poissonian occurrence of earthquakes. Here we propose a method that enables the inclusion of time and space dependences between earthquakes in the probabilistic estimation of hazard. Combining the Epidemic-Type Aftershock Sequence (ETAS) seismicity model with a Monte Carlo technique, aftershocks are naturally accounted for in the hazard determination. The method is applied to the Pyrenees region in southern France. The impact on hazard of declustering and of the usual assumption that earthquakes occur according to a Poisson process is quantified, showing that aftershocks contribute on average less than 5 per cent to the probabilistic hazard, with an upper bound around 18 per cent.
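The kind of comparison described here can be sketched with a minimal Monte Carlo experiment contrasting exceedance probabilities with and without triggered events; the rate, productivity, and b-value below are assumed for illustration and are not the Pyrenees values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo hazard sketch: probability of at least one M >= 5 event in a
# 50-year window, with and without triggered events (all values assumed).
n_cat, years = 20_000, 50.0
rate_bg = 0.04      # background rate of M >= 4 events per year (assumed)
b = 1.0             # Gutenberg-Richter b-value (assumed)

def p_exceed(with_aftershocks: bool) -> float:
    n = rng.poisson(rate_bg * years, n_cat)        # background events, M >= 4
    if with_aftershocks:
        n = n + rng.poisson(0.5 * n)               # triggered events (assumed productivity)
    # Thin each catalog to M >= 5 using the G-R probability 10**(-b * 1).
    n_large = rng.binomial(n, 10.0 ** (-b * 1.0))
    return float(np.mean(n_large > 0))

p_poisson = p_exceed(False)
p_etas = p_exceed(True)
print(f"P(M>=5 in 50 yr): {p_poisson:.3f} Poisson-only vs {p_etas:.3f} with aftershocks")
```

As in the study, the difference between the two probabilities quantifies the contribution of aftershocks to the hazard estimate, here inflated by a deliberately high assumed productivity.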
The estimation of minimum-misfit stochastic models from empirical ground-motion prediction equations
(2006)
In areas of moderate to low seismic activity there is commonly a lack of recorded strong ground motion. As a consequence, the prediction of ground motion expected for hypothetical future earthquakes is often performed by employing empirical models from other regions. In this context, Campbell's hybrid empirical approach (Campbell, 2003, 2004) provides a methodological framework to adapt ground-motion prediction equations to arbitrary target regions by using response spectral host-to-target-region-conversion filters. For this purpose, the empirical ground-motion prediction equation has to be quantified in terms of a stochastic model. The problem we address here is how to do this in a systematic way and how to assess the corresponding uncertainties. For the determination of the model parameters we use a genetic algorithm search. The stochastic model spectra were calculated by using a speed-optimized version of SMSIM (Boore, 2000). For most of the empirical ground-motion models, we obtain sets of stochastic models that match the empirical models within the full magnitude and distance ranges of their generating data sets fairly well. The overall quality of fit and the resulting model parameter sets strongly depend on the particular choice of the distance metric used for the stochastic model. We suggest the use of the hypocentral distance metric for the stochastic simulation of strong ground motion because it provides the lowest-misfit stochastic models for most empirical equations. This is in agreement with the results of two recent studies of hypocenter locations in finite-source models which indicate that hypocenters are often located close to regions of large slip (Mai et al., 2005; Manighetti et al., 2005).
Because essentially all empirical ground-motion prediction equations contain data from different geographical regions, the model parameters corresponding to the lowest-misfit stochastic models cannot necessarily be expected to represent single, physically realizable host regions, but rather to model the generating data sets in an average way. In addition, the differences between the lowest-misfit stochastic models and the empirical ground-motion prediction equation are strongly distance, magnitude, and frequency dependent, which, according to the laws of uncertainty propagation, will increase the variance of the corresponding hybrid empirical model predictions (Scherbaum et al., 2005). As a consequence, the selection of empirical ground-motion models for host-to-target-region conversions requires considerable judgment of the ground-motion analyst.
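A genetic-algorithm search of the kind mentioned above can be sketched with a toy spectral model; the omega-square-style spectrum, its parameters, and the GA settings below are illustrative stand-ins, not the SMSIM model or the paper's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(7)
freqs = np.logspace(-1, 1.3, 50)                    # 0.1 to ~20 Hz

def spectrum(amp, fc, kappa):
    # Toy omega-square-style acceleration spectrum with kappa attenuation.
    return amp * freqs**2 / (1.0 + (freqs / fc) ** 2) * np.exp(-np.pi * kappa * freqs)

# "Empirical" target spectrum built from known parameters, so the GA fit
# can be checked against ground truth (amp=1.0, fc=2.0 Hz, kappa=0.04 s).
target = spectrum(1.0, 2.0, 0.04)

def misfit(p):
    return np.sqrt(np.mean((np.log(spectrum(*p)) - np.log(target)) ** 2))

# Minimal elitist GA: tournament selection plus Gaussian mutation.
lo, hi = np.array([0.1, 0.5, 0.0]), np.array([5.0, 10.0, 0.1])
pop = rng.uniform(lo, hi, size=(60, 3))
for _ in range(120):
    fit = np.array([misfit(p) for p in pop])
    elite = pop[np.argmin(fit)].copy()
    i, j = rng.integers(0, 60, size=(2, 60))        # pick pairs of parents
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    pop = np.clip(parents + rng.normal(0.0, [0.05, 0.1, 0.002], (60, 3)), lo, hi)
    pop[0] = elite                                  # elitism: keep the best

fit = np.array([misfit(p) for p in pop])
best = pop[np.argmin(fit)]
print(f"best (amp, fc, kappa) ~ {np.round(best, 3)}, log-misfit {fit.min():.3f}")
```

The same structure (population of parameter triples, misfit over a magnitude/distance/frequency grid, selection and mutation) carries over when the toy spectrum is replaced by a full stochastic simulation code.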
In estimating dispersion with the help of wavelet analysis, considerable emphasis has been put on the extraction of the group velocity using the modulus of the wavelet transform. In this paper we give an asymptotic expression of the full propagator in wavelet space that comprises the phase velocity as well. This operator establishes a relationship between the observed signals at two different stations during wave propagation in a dispersive and attenuating medium. Numerical and experimental examples are presented to show that the method accurately models seismic wave dispersion and attenuation.
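The classical group-velocity extraction from the modulus of the wavelet transform can be sketched for a synthetic dispersive medium; the linear phase-velocity law, distances, and wavelet settings below are assumptions for illustration, and attenuation is omitted:

```python
import numpy as np

fs, n = 200.0, 4096
t = np.arange(n) / fs
dist = 10_000.0                                  # station separation, m (assumed)

# Source at station 1: Ricker pulse centered at 1 s, ~8 Hz peak frequency.
fp, t0 = 8.0, 1.0
s1 = (1 - 2 * (np.pi * fp * (t - t0)) ** 2) * np.exp(-(np.pi * fp * (t - t0)) ** 2)

# Dispersive propagation to station 2 with an assumed linear phase-velocity
# law c(f) = 2000 + 20 f m/s, applied as a frequency-domain phase shift.
f = np.fft.rfftfreq(n, 1 / fs)
c = 2000.0 + 20.0 * f
s2 = np.fft.irfft(np.fft.rfft(s1) * np.exp(-2j * np.pi * f * dist / c), n)

def morlet_envelope(sig, f0, cycles=6.0):
    """Modulus of the Morlet wavelet transform at a single frequency f0."""
    sigma = cycles / (2 * np.pi * f0)
    tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    w = np.exp(-tw**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * tw)
    return np.abs(np.convolve(sig, w, mode="same"))

# Group velocity at f0 from the envelope-peak travel-time difference.
f0 = 8.0
dt_group = t[np.argmax(morlet_envelope(s2, f0))] - t[np.argmax(morlet_envelope(s1, f0))]
v_group = dist / dt_group
print(f"measured group velocity ~ {v_group:.0f} m/s "
      f"(theory for this c(f): c^2/c0 = {(2000 + 20 * f0) ** 2 / 2000:.0f} m/s)")
```

Recovering the phase velocity as well requires the phase of the wavelet-domain propagator, which is the extension the abstract describes.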
The most recent intense earthquake swarm in the Vogtland lasted from 6 October 2008 until January 2009. The largest magnitudes exceeded M 3.5 several times in October, making it the strongest swarm since 1985/86. In contrast to the swarms in 1985 and 2000, seismic moment release was concentrated near the swarm onset. The focal area and temporal evolution are similar to those of the swarm in 2000. Our working hypothesis is that uprising upper-mantle fluids trigger swarm earthquakes at low stress levels. To monitor the seismicity, the University of Potsdam operated a small-aperture seismic array at 10 km epicentral distance between 18 October 2008 and 18 March 2009. Consisting of 12 seismic stations and 3 additional microphones, the array is capable of detecting earthquakes down to very low magnitudes (M < -1) as well as associated air waves. We use array techniques to determine properties of the incoming wavefield: noise, direct P and S waves, and converted phases.
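A standard array technique for characterizing the incoming wavefield is delay-and-sum beamforming over a slowness grid; the array geometry, wave parameters, and noise level below are hypothetical, not those of the Vogtland deployment:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100.0
t = np.arange(0.0, 4.0, 1.0 / fs)

# Hypothetical small-aperture array geometry (x, y in km).
xy = np.array([[0.0, 0.0], [0.3, 0.1], [-0.2, 0.3], [0.1, -0.3], [-0.3, -0.1]])

def ricker(tt, fp=5.0):
    a = (np.pi * fp * tt) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Plane wave from backazimuth 60 deg with 0.2 s/km horizontal slowness.
baz, slow = np.deg2rad(60.0), 0.2
u_true = slow * np.array([-np.sin(baz), -np.cos(baz)])   # propagation direction
traces = np.array([ricker(t - 2.0 - xy_i @ u_true) for xy_i in xy])
traces += 0.05 * rng.standard_normal(traces.shape)       # additive noise

# Delay-and-sum beamforming: grid search over the horizontal slowness vector.
grid = np.linspace(-0.4, 0.4, 81)
best_u, best_pow = None, -1.0
for ux in grid:
    for uy in grid:
        tau = xy @ np.array([ux, uy])
        beam = np.mean([np.interp(t + d, t, tr) for d, tr in zip(tau, traces)], axis=0)
        p = np.sum(beam**2)
        if p > best_pow:
            best_pow, best_u = p, (ux, uy)

baz_est = np.degrees(np.arctan2(-best_u[0], -best_u[1])) % 360.0
slow_est = float(np.hypot(*best_u))
print(f"backazimuth ~ {baz_est:.0f} deg, slowness ~ {slow_est:.2f} s/km")
```

The estimated slowness and backazimuth allow direct P, S, converted, and air-coupled phases to be separated by their distinct apparent velocities and arrival directions.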
Workshop of the Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung (Interdisciplinary Center for Pattern Dynamics and Applied Remote Sensing), 9-10 February 2006