This thesis is concerned with the characterization of seismicity on the basis of earthquake catalogs. New data-analysis methods are developed to determine whether the seismic dynamics is governed by a stochastic or a deterministic process and what this implies for the predictability of large earthquakes. It is shown that seismically active regions are frequently characterized by nonlinear determinism, which at least leaves open the possibility of short-term prediction. The occurrence of seismic quiescence is often interpreted as a precursor of large earthquakes. A new method is presented that enables a systematic spatiotemporal mapping of periods of seismic quiescence. Their statistical significance is assessed using the concept of surrogate data. The result is a clear correlation between periods of seismic quiescence and large earthquakes. Nevertheless, the significance is not high enough to permit a prediction in the sense of a statement about the location, time, and magnitude of an expected main shock.
Similar power laws for foreshock and aftershock sequences in a spring block model for earthquakes
(1999)
Seismic quiescence as an indicator for large earthquakes in a system of self-organized criticality
(2000)
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. The present study therefore combines numerical modeling and analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes that large earthquakes occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity, such as the frequency-size distribution and the temporal clustering of earthquakes, depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying the regions in parameter space that are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity in terms of a critical state preceding a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, a growth of the spatial correlation length and an acceleration of the seismic energy release prior to large events are found. The predictive power of these precursors is, however, limited. Rather than forecasting the time, location, and magnitude of individual events, they are encouraging as one contribution to a broad multiparameter approach.
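The accelerating seismic energy release mentioned above is commonly quantified by fitting a power-law time-to-failure model to the cumulative Benioff strain before a large event. The following sketch illustrates only the fitting step; the data are synthetic and all parameter values are invented, not taken from the study.

```python
# Sketch: fitting the accelerating-moment-release (time-to-failure) model
#   cumulative Benioff strain(t) = A - B * (tc - t)**m
# to a precursory sequence. Data are synthetic, generated from the model itself.
import numpy as np
from scipy.optimize import curve_fit

def amr_model(t, A, B, tc, m):
    return A - B * np.clip(tc - t, 1e-6, None) ** m

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 9.5, 120))                     # event times (years)
A_true, B_true, tc_true, m_true = 100.0, 30.0, 10.0, 0.3
strain = amr_model(t, A_true, B_true, tc_true, m_true) + rng.normal(0, 0.5, t.size)

p0 = [strain[-1], 10.0, 10.5, 0.5]                          # rough starting values
params, _ = curve_fit(amr_model, t, strain, p0=p0, maxfev=20000)
print("estimated failure time tc =", round(params[2], 2),
      "   exponent m =", round(params[3], 2))
```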
Introduction to special issue: Dynamics of seismicity patterns and earthquake triggering - Preface
(2006)
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung: workshop, 9-10 February 2006
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence-time distribution of large earthquakes based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows magnitude uncertainties to be included in a straightforward manner. Errors arising from grouped magnitudes and from random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early stage of seismicity evolution, even though only a small number of large earthquakes has occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and therefore cannot be neglected, and we show how to correct for the bias caused by magnitude errors. For the Parkfield region we find that the aperiodicity, or coefficient of variation, is clearly higher than in studies based solely on the large earthquakes.
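As an illustration of Bayesian updating of an aperiodicity parameter (a sketch of the general idea only, not the renewal model of the paper; the lognormal recurrence model, the inter-event times, and the grid bounds are my assumptions), the posterior can be evaluated on a parameter grid:

```python
# Sketch: grid-based Bayesian posterior for the aperiodicity (coefficient of
# variation) of a renewal process, assuming a lognormal recurrence-time model
# and synthetic inter-event times.
import numpy as np
from scipy.stats import lognorm

intervals = np.array([22.0, 31.0, 18.0, 25.0, 29.0])   # hypothetical recurrence times (years)
mean_T = intervals.mean()

alphas = np.linspace(0.05, 1.5, 300)                    # candidate aperiodicities
log_post = np.full_like(alphas, -np.inf)

for i, a in enumerate(alphas):
    # Lognormal parameterized so that mean = mean_T and CV = a
    sigma = np.sqrt(np.log(1.0 + a**2))
    mu = np.log(mean_T) - 0.5 * sigma**2
    log_post[i] = lognorm.logpdf(intervals, s=sigma, scale=np.exp(mu)).sum()

post = np.exp(log_post - log_post.max())
post /= np.trapz(post, alphas)                           # flat prior on alpha
print("posterior mean aperiodicity:", np.trapz(alphas * post, alphas))
```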
In recent years, the triggering of earthquakes has been discussed controversially with respect to the underlying mechanisms and to the capability to evaluate the resulting seismic hazard. Apart from static stress interactions, other mechanisms, including dynamic stress transfer, have been proposed to be part of a complex triggering process. Exploiting the theoretical relation between long-term earthquake rates and stressing rate, we demonstrate that static stress changes resulting from an earthquake rupture allow us to predict aftershock activity quantitatively without tuning specific model parameters. These forecasts are found to be in excellent agreement with all first-order characteristics of aftershocks, in particular (1) the total number, (2) the power-law distance decay, (3) the scaling of the productivity with main shock magnitude, (4) the foreshock probability, and (5) the empirical Båth law for the maximum aftershock magnitude, which supports the conclusion that static stress transfer is the major mechanism of earthquake triggering.
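The link between a static stress step and the transient seismicity rate is usually made through Dieterich's (1994) rate-and-state seismicity-rate formulation. A minimal sketch of that relation follows; the stress step, background rate, Aσ, and aftershock relaxation time are placeholder values, not parameters from the paper.

```python
# Sketch: Dieterich (1994) seismicity-rate response to a static Coulomb
# stress step. Parameter values below are placeholders for illustration.
import numpy as np

def dieterich_rate(t, d_cfs, r_background, a_sigma, t_a):
    """Seismicity rate R(t) after a stress step d_cfs (MPa).
    r_background: steady background rate, a_sigma: A*sigma (MPa),
    t_a: aftershock relaxation time (same unit as t)."""
    gamma = (np.exp(-d_cfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r_background / gamma

t = np.linspace(0.0, 365.0, 1000)           # days after the main shock
rate = dieterich_rate(t, d_cfs=0.5, r_background=0.1, a_sigma=0.04, t_a=3650.0)
print("rate right after the step:", rate[0], "events/day")
```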
Aftershock models are usually based either on purely empirical relations that ignore the physical mechanism or on deterministic calculations of stress changes on a predefined receiver fault orientation. Here we investigate the effect of considering more realistic fault systems in models based on static Coulomb stress changes. For that purpose, we perform earthquake simulations with elastic half-space stress interactions, rate- and state-dependent frictional earthquake nucleation, and extended ruptures with heterogeneous (fractal) slip distributions. We find that considering earthquake nucleation on multiple receiver fault orientations does not influence the shape of the temporal Omori-type aftershock decay, but significantly changes the predicted spatial patterns and the total number of triggered events. So-called stress shadows with decreased activity almost vanish, and activation decays continuously with increasing distance from the main shock rupture. The total aftershock productivity, which is shown to be almost independent of the assumed background rate, increases significantly if multiple receiver fault planes exist. The application to the 1992 M7.3 Landers, California, aftershock sequence indicates good agreement with the locations and the total productivity of the observed directly triggered aftershocks.
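A Coulomb failure stress change on a given receiver orientation is the shear stress change resolved on that plane plus the effective friction coefficient times the normal stress change; with multiple candidate orientations one may, for example, take the most positive value. A small 2-D sketch of this resolution step (the stress-change tensor and friction coefficient are made-up numbers, not values from the study):

```python
# Sketch: resolve a Coulomb failure stress change, dCFS = d_tau + mu' * d_sigma_n,
# on a set of candidate receiver fault orientations (2-D illustration).
import numpy as np

def coulomb_stress_change(stress_change, strike_angles_deg, mu_eff=0.4):
    """stress_change: 2x2 stress-change tensor [MPa] (tension positive).
    Returns dCFS for slip along strike on each candidate plane."""
    results = []
    for phi in np.deg2rad(strike_angles_deg):
        n = np.array([-np.sin(phi), np.cos(phi)])    # unit normal to the plane
        s = np.array([np.cos(phi), np.sin(phi)])     # unit slip direction along strike
        traction = stress_change @ n
        d_sigma_n = traction @ n                     # normal stress change (tension +)
        d_tau = traction @ s                         # shear stress change in slip direction
        results.append(d_tau + mu_eff * d_sigma_n)
    return np.array(results)

d_stress = np.array([[0.3, -0.1],
                     [-0.1, -0.2]])                  # hypothetical stress change (MPa)
angles = np.arange(0, 180, 15)
dcfs = coulomb_stress_change(d_stress, angles)
print("most favorable receiver orientation:", angles[np.argmax(dcfs)], "deg,",
      "dCFS =", round(dcfs.max(), 3), "MPa")
```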
We discuss to what extent a given earthquake catalog and the assumption of a doubly truncated Gutenberg-Richter distribution for the earthquake magnitudes allow for the calculation of confidence intervals for the maximum possible magnitude M. We show that, without further assumptions such as the existence of an upper bound of M, only very limited information may be obtained. In a frequentist formulation, for each confidence level α the confidence interval diverges with finite probability. In a Bayesian formulation, the posterior distribution of the upper magnitude is not normalizable. We conclude that the common approach of deriving confidence intervals from the variance of a point estimator fails. Technically, this problem can be overcome by introducing an upper bound M̃ for the maximum magnitude. Then the Bayesian posterior distribution can be normalized, and its variance decreases with the number of observed events. However, because the posterior depends significantly on the choice of the unknown value of M̃, the resulting confidence intervals are essentially meaningless. The use of an informative prior distribution accounting for pre-knowledge of M is also of little use, because the prior is only modified in the case of the occurrence of an extreme event. Our results suggest that the maximum possible magnitude M should better be replaced by M(T), the maximum expected magnitude in a given time interval T, for which the calculation of exact confidence intervals becomes straightforward. From a physical point of view, numerical models of the earthquake process adjusted to specific fault regions may be a powerful alternative to overcome the shortcomings of purely statistical inference.
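The non-normalizable posterior can be seen directly from the doubly truncated Gutenberg-Richter likelihood, which approaches a positive constant as the assumed maximum magnitude grows. A numeric sketch of this behavior (synthetic magnitudes and a b-value of 1 are my assumptions):

```python
# Sketch: likelihood of the maximum possible magnitude M under a doubly
# truncated Gutenberg-Richter law. With a flat prior the posterior is
# proportional to this likelihood, which tends to a positive constant for
# large M and is therefore not normalizable. Synthetic data, b = 1.
import numpy as np

rng = np.random.default_rng(1)
beta = np.log(10.0)                       # beta = b * ln(10), b = 1
m_min, true_m_max = 4.0, 7.5
u = rng.uniform(size=500)
# Inverse-CDF sampling from the truncated exponential (GR) distribution
mags = m_min - np.log(1 - u * (1 - np.exp(-beta * (true_m_max - m_min)))) / beta

def log_likelihood(m_max, mags):
    if m_max < mags.max():
        return -np.inf
    # density f(m) = beta*exp(-beta*(m-m_min)) / (1 - exp(-beta*(m_max-m_min)))
    norm = 1.0 - np.exp(-beta * (m_max - m_min))
    return np.sum(np.log(beta) - beta * (mags - m_min) - np.log(norm))

for m_max in [7.6, 8.5, 10.0, 12.0, 20.0]:
    print(m_max, log_likelihood(m_max, mags))
# The log-likelihood flattens: increasing M beyond ~9 barely changes it.
```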
We develop a multigrid, multiple-time-stepping scheme to reduce the computational effort of calculating complex stress interactions on a 2-D planar strike-slip fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scales, grid coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the weak contribution is computed on a coarse level, and the strong interactions are computed on a fine level. This partition leads to a significant reduction of the number of computations, and the reduction of complexity is further enhanced by combining the multigrid approach with multiple time stepping. Computational efficiency is improved by a factor of 10 while retaining reasonable accuracy compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
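A minimal sketch of the underlying idea of splitting interactions by a cut-off radius and updating the weak far-field part less frequently (the interaction kernel, cut-off radius, and update interval are invented for illustration; this is not the solver described in the paper):

```python
# Sketch: split a dense stress-interaction matrix into a near-field part
# (within a cut-off radius, updated every step) and a far-field part
# (updated only every k-th step). Kernel and geometry are illustrative.
import numpy as np

n = 200
x = np.linspace(0.0, 100.0, n)                     # fault-cell positions (km)
dist = np.abs(x[:, None] - x[None, :])
kernel = 1.0 / (1.0 + dist) ** 3                   # made-up elastic decay ~ 1/r^3

r_cut = 10.0                                       # cut-off radius (km)
near = np.where(dist <= r_cut, kernel, 0.0)
far = kernel - near

slip_rate = np.random.default_rng(2).random(n)
k_far = 10                                         # far-field updated every 10 steps
far_stress = far @ slip_rate

stress = np.zeros(n)
for step in range(100):
    if step % k_far == 0:                          # infrequent far-field update
        far_stress = far @ slip_rate
    stress = near @ slip_rate + far_stress         # frequent near-field update
```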
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions with mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the 2004 M6.0 Parkfield earthquake. We perform a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is obtained for a physically reasonable model constrained by the stress condition (namely that postseismic creep is driven by coseismic stress) and by the condition that coseismic slip and large aftershocks are spatially disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that, for the Parkfield case, the major postseismic deformation can generally be explained by a stress relaxation process. This result also indicates that the data constraining the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of approximately 1.15 in seismic velocity.
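The model-selection step via information criteria can be sketched in a simplified linear setting: compare an unconstrained least-squares slip inversion against a constrained one and score both with an information criterion. Everything below (Green's functions, data, the non-negativity constraint standing in for a mechanical constraint, and the use of AIC) is a hypothetical illustration, not the inversion of the paper.

```python
# Sketch: compare a constrained and an unconstrained linear slip inversion
# with an information criterion (AIC). All inputs are placeholders.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
G = rng.normal(size=(60, 20))          # hypothetical Green's functions (60 data, 20 patches)
true_slip = np.clip(rng.normal(0.5, 0.3, 20), 0, None)
d = G @ true_slip + rng.normal(0, 0.05, 60)

def aic(residuals, n_params):
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * n_params

# Unconstrained least squares vs. a constraint-based (non-negative) inversion
m_free, *_ = np.linalg.lstsq(G, d, rcond=None)
m_nn, _ = nnls(G, d)

print("AIC unconstrained:", aic(d - G @ m_free, 20))
print("AIC non-negative :", aic(d - G @ m_nn, np.count_nonzero(m_nn)))
```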
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and to calculate precise confidence intervals based on an imposed level of confidence α. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Λ and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Λ and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and from the low-seismicity but high-risk Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered, and in agreement with earlier work we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
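A plug-in version of the idea (not the full Bayesian treatment of the paper) is easy to sketch: for a stationary Poisson process with rate Λ above the completeness magnitude and an unbounded Gutenberg-Richter law, the distribution of the maximum magnitude in a future window T_f is P(M_max ≤ m) = exp(-Λ T_f 10^{-b(m - m_c)}). The rate, b-value, completeness magnitude, and horizon below are invented example values.

```python
# Sketch: distribution of the maximum expected magnitude in a future window
# T_f, assuming a stationary Poisson process above the completeness magnitude
# m_c and an unbounded Gutenberg-Richter law. Numbers are illustrative.
import numpy as np

lam = 20.0          # events per year with m >= m_c
b = 1.0             # Gutenberg-Richter b-value
m_c = 4.0           # completeness magnitude
T_f = 50.0          # future horizon in years

def cdf_max_magnitude(m):
    """P(maximum magnitude in T_f <= m)."""
    rate_above_m = lam * 10.0 ** (-b * (m - m_c))
    return np.exp(-rate_above_m * T_f)

# Quantiles of the maximum expected magnitude, e.g. the 95% level
m_grid = np.linspace(m_c, 10.0, 2001)
cdf = cdf_max_magnitude(m_grid)
m95 = m_grid[np.searchsorted(cdf, 0.95)]
print("maximum expected magnitude in", T_f, "years (95% level):", round(m95, 2))
```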
Time-dependent probabilistic seismic hazard assessment requires a stochastic description of earthquake occurrences. While short-term seismicity models are well constrained by observations, the recurrences of characteristic on-fault earthquakes are derived only from theoretical considerations, uncertain palaeo-events, or proxy data. Despite the uncertainties and complexity involved, simple statistical models for a quasi-periodic recurrence of on-fault events are implemented in seismic hazard assessments. To test the applicability of statistical models such as the Brownian relaxation oscillator or the stress release model, we perform a systematic comparison with deterministic simulations based on rate- and state-dependent friction, high-resolution representations of fault systems, and quasi-dynamic rupture propagation. For the specific fault network of the Lower Rhine Embayment, Germany, we run both stochastic and deterministic model simulations based on the same fault geometries and stress interactions. Our results indicate that the stochastic simulators are able to reproduce the first-order characteristics of the major earthquakes on isolated faults as well as on coupled faults with moderate stress interactions. However, we find that all tested statistical models fail to reproduce the characteristics of strongly coupled faults, because multisegment rupturing resulting from a spatiotemporally correlated stress field is underestimated in the stochastic simulators. Our results suggest that stochastic models have to be extended by multirupture probability distributions to provide more reliable results.
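One of the statistical models compared, the Brownian relaxation oscillator, produces recurrence times that follow a Brownian passage-time (inverse Gaussian) distribution. A minimal renewal simulator of that kind is sketched below; the mean recurrence time and aperiodicity are placeholder values, not results from the paper.

```python
# Sketch: renewal simulation of characteristic earthquake times with
# Brownian passage-time (inverse Gaussian) recurrence intervals.
# Parameter values are invented.
import numpy as np
from scipy.stats import invgauss

mean_T = 400.0      # mean recurrence time (years)
alpha = 0.5         # aperiodicity (coefficient of variation)

# scipy's invgauss(mu, scale) has mean mu*scale and CV sqrt(mu);
# choose mu and scale so that mean = mean_T and CV = alpha.
mu = alpha**2
scale = mean_T / mu

rng = np.random.default_rng(4)
intervals = invgauss.rvs(mu, scale=scale, size=25, random_state=rng)
event_times = np.cumsum(intervals)
print("simulated event times (years):", np.round(event_times[:5], 1))
print("sample CV:", intervals.std(ddof=1) / intervals.mean())
```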
Convergence of the frequency-magnitude distribution of global earthquakes - maybe in 200 years
(2013)
I study the ability to estimate the tail of the frequency-magnitude distribution of global earthquakes. While power-law scaling for small earthquakes is supported by the data, the tail remains speculative. In a recent study, Bell et al. (2013) claim that the frequency-magnitude distribution of global earthquakes converges to a tapered Pareto distribution. I show that this finding results from data-fitting errors, namely from the biased maximum likelihood estimation of the corner magnitude θ in strongly undersampled models. In particular, the estimation of θ depends solely on the few largest events in the catalog. Taking this into account, I compare various state-of-the-art models for the global frequency-magnitude distribution. After discarding undersampled models, the remaining ones, including the unbounded Gutenberg-Richter distribution, all perform equally well and are therefore indistinguishable. Convergence to a specific distribution, if it ever takes place, requires at least about 200 years of homogeneous recording of global seismicity.
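The sensitivity of the corner estimate to the few largest events can be illustrated by fitting a tapered Pareto distribution to seismic moments by maximum likelihood and refitting after removing the single largest event. The catalog below is synthetic and the fit is a simple numerical MLE sketch, not the data or procedure of Bell et al. (2013).

```python
# Sketch: maximum-likelihood fit of a tapered Pareto distribution to seismic
# moments, showing how the corner-moment estimate depends on the largest
# events. Synthetic catalog, illustrative parameter values.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, M, M_t):
    beta, log_Mc = params
    Mc = np.exp(log_Mc)
    if beta <= 0:
        return np.inf
    # pdf of the tapered Pareto: (beta/M + 1/Mc) * (M_t/M)**beta * exp((M_t - M)/Mc)
    pdf = (beta / M + 1.0 / Mc) * (M_t / M) ** beta * np.exp((M_t - M) / Mc)
    return -np.sum(np.log(pdf))

def sample_tapered_pareto(n, beta, M_t, Mc, rng):
    # A tapered Pareto variate is the minimum of a Pareto variate and a
    # shifted exponential variate (their survival functions multiply).
    pareto = M_t * rng.uniform(size=n) ** (-1.0 / beta)
    expo = M_t + rng.exponential(Mc, size=n)
    return np.minimum(pareto, expo)

rng = np.random.default_rng(5)
M_t = 1e17                      # threshold moment (N m), roughly Mw 5.3
M = sample_tapered_pareto(2000, beta=0.65, M_t=M_t, Mc=1e21, rng=rng)

for label, data in [("full catalog", M), ("largest event removed", np.sort(M)[:-1])]:
    res = minimize(neg_log_lik, x0=[0.6, np.log(1e21)], args=(data, M_t),
                   method="Nelder-Mead")
    print(label, "-> beta =", round(res.x[0], 3), " corner moment =",
          f"{np.exp(res.x[1]):.2e}")
```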
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with a heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded in the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to use the long simulated seismicity to address questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.