We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and to calculate precise confidence intervals based on an imposed confidence level alpha. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Lambda and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Lambda and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and from the low-seismicity but high-risk Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
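The core of this setup can be sketched numerically: under a stationary Poisson process with rate lambda and Gutenberg-Richter magnitudes (exceedances above the threshold m0 are exponential with rate b*ln(10)), the maximum magnitude in a future window T_f can be sampled by Monte Carlo. This is a minimal sketch with illustrative parameter values; it replaces the paper's Bayesian treatment of the unknown Lambda and b with fixed point values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_max_magnitude(lam, b, m0, t_future, n_sim=100_000):
    """Monte Carlo draws of the maximum magnitude in a future window.

    Assumes a stationary Poisson process with rate `lam` (events/yr
    above threshold m0) and Gutenberg-Richter magnitudes, i.e.
    exceedances exponential with rate beta = b*ln(10). Parameters
    are illustrative, not taken from the paper.
    """
    beta = b * np.log(10.0)
    n = rng.poisson(lam * t_future, size=n_sim)  # event count per window
    # maximum of n iid exponentials via the inverse CDF of the maximum
    u = rng.random(n_sim)
    mmax = m0 - np.log(1.0 - u ** (1.0 / np.maximum(n, 1))) / beta
    mmax[n == 0] = m0  # no events in the window: report the threshold
    return mmax

mm = simulate_max_magnitude(lam=5.0, b=1.0, m0=4.0, t_future=30.0)
# upper bound on the future maximum magnitude at 90% confidence
q90 = float(np.quantile(mm, 0.90))
```

The empirical quantiles of `mm` play the role of the confidence bounds discussed in the abstract; a full treatment would additionally integrate over the posterior of Lambda and b.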
Earthquake catalogs are probably the most informative data source about spatiotemporal seismicity evolution. The catalog quality in one of the most active seismogenic zones in the world, Japan, is excellent, although changes in quality arising, for example, from an evolving network are clearly present. Here, we seek the best estimate for the largest expected earthquake in a given future time interval from a combination of historic and instrumental earthquake catalogs. We extend the technique introduced by Zöller et al. (2013) to estimate the maximum magnitude in a time window of length T_f for earthquake catalogs with varying levels of completeness. In particular, we consider the case in which two types of catalogs are available: a historic catalog and an instrumental catalog. This leads to competing interests with respect to the estimation of the two parameters of the Gutenberg-Richter law, the b-value and the event rate lambda above a given lower-magnitude threshold (the a-value). The b-value is estimated most precisely from the frequently occurring small earthquakes; however, the tendency of small events to cluster in aftershocks, swarms, etc. violates the assumption of a Poisson process that is used for the estimation of lambda. We suggest addressing this conflict by estimating b solely from instrumental seismicity and using large-magnitude events from historic catalogs for the earthquake rate estimation. Applying the method to Japan, there is a probability of about 20% that the maximum expected magnitude during any future time interval of length T_f = 30 years is m >= 9.0. Studies of different subregions in Japan indicate high probabilities for M 8 earthquakes along the Tohoku arc and relatively low probabilities in the Tokai, Tonankai, and Nankai region. Finally, for scenarios related to long time horizons and high confidence levels, the maximum expected magnitude will be around 10.
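The two-catalog idea above can be sketched with standard estimators: the b-value via the Aki-Utsu maximum-likelihood formula from instrumental data, and the rate of large events from a longer (here hypothetical) historic record. The function names, catalog values, and parameter choices below are illustrative, not the paper's.

```python
import numpy as np

def aki_b_value(mags, m_c, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value; `m_c` is the completeness
    magnitude, `dm` the magnitude binning width (Utsu correction)."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    mean_excess = m.mean() - (m_c - dm / 2.0)
    return 1.0 / (np.log(10.0) * mean_excess)

def rate_above(mags, years, m_ref):
    """Poisson rate of events with magnitude >= m_ref, taken from a
    historic catalog spanning `years` years."""
    return np.count_nonzero(np.asarray(mags) >= m_ref) / years

# synthetic instrumental catalog with a true b-value of 1.0
rng = np.random.default_rng(1)
inst = 4.0 + rng.exponential(1.0 / np.log(10.0), size=5000)
b_hat = aki_b_value(inst, m_c=4.0)

# hypothetical historic record of large events over 400 years
historic = np.array([7.1, 7.8, 8.2, 6.9, 7.4])
lam_hist = rate_above(historic, years=400.0, m_ref=7.0)
```

Estimating b from the dense instrumental data and lambda from the declustered large events is the division of labor the abstract argues for.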
The first step in the estimation of probabilistic seismic hazard in a region commonly consists of the definition and characterization of the relevant seismic sources. Because in low-seismicity regions seismicity is often rather diffuse and faults are difficult to identify, large areal source zones are mostly used. The corresponding hypothesis is that seismicity is uniformly distributed inside each areal seismic source zone. In this study, the impact of this hypothesis on the probabilistic hazard estimation is quantified through the generation of synthetic spatial seismicity distributions. Fractal seismicity distributions are generated inside a given source zone and probabilistic hazard is computed for a set of sites located inside this zone. In our study, the impact of the spatial seismicity distribution is defined as the deviation from the hazard value obtained for a spatially uniform seismicity distribution. From the generation of a large number of synthetic distributions, the correlation between the fractal dimension D and the impact is derived. The results show that the assumption of spatially uniform seismicity tends to bias the hazard to higher values. The correlation can be used to determine the systematic biases and uncertainties for hazard estimations in real cases, where the fractal dimension has been determined. We apply the technique in Germany (Cologne area) and in France (Alps).
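A standard way to measure the fractal dimension D of a spatial seismicity distribution, as used in studies of this kind, is the Grassberger-Procaccia correlation dimension: the slope of log C(r) versus log r, where C(r) is the fraction of event pairs closer than r. A minimal sketch (not necessarily the exact estimator of this study):

```python
import numpy as np

def correlation_dimension(points, r_min, r_max, n_r=10):
    """Grassberger-Procaccia correlation-dimension estimate: slope of
    log C(r) vs log r, where C(r) is the fraction of point pairs at
    distance < r. O(N^2) pairwise distances, so for small catalogs."""
    pts = np.asarray(points)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)
    dist = d[iu]
    rs = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    c = np.array([np.mean(dist < r) for r in rs])
    slope, _ = np.polyfit(np.log(rs), np.log(c), 1)
    return slope

# uniform (non-fractal) epicenters in the unit square should give D near 2
rng = np.random.default_rng(2)
pts = rng.random((800, 2))
d_hat = correlation_dimension(pts, r_min=0.02, r_max=0.2)
```

A spatially uniform distribution recovers D close to the embedding dimension; clustered (fractal) synthetic catalogs yield smaller D, which is the quantity correlated with the hazard bias in the abstract.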
Groningen is the largest onshore gas field under production in Europe. The pressure depletion of the gas field started in 1963. In 1991, the first induced micro-earthquakes were located at reservoir level, with increasing rates in the following decades. Most of these events are of magnitude less than 2.0 and cannot be felt. However, maximum observed magnitudes continuously increased over the years until the largest event to date, with ML = 3.6, was recorded in 2014, which finally led to the decision to reduce the production. This causal sequence illustrates the crucial role of understanding and modeling the relation between production and induced seismicity for economic planning and hazard assessment. Here we test whether the induced seismicity related to gas exploration can be modeled by the statistical response of fault networks with rate-and-state-dependent frictional behavior. We use the long and complete local seismic catalog and, additionally, detailed information on production-induced changes at the reservoir level to test different seismicity models. Both the changes of the fluid pressure and of the reservoir compaction are tested as input to approximate the Coulomb stress changes. We find that the rate-and-state model with a constant tectonic background seismicity rate can reproduce the observed long delay of the seismicity onset. In contrast, so-called Coulomb failure models with instantaneous earthquake nucleation need to assume that all faults are initially far from a critical state of stress to explain the delay. Our rate-and-state model based on the fluid pore pressure fits the spatiotemporal pattern of the seismicity best, and the fit further improves by taking the fault density and orientation into account. Despite its simplicity, with only three free parameters, the rate-and-state model can reproduce the main statistical features of the observed activity.
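The rate-and-state response invoked here goes back to Dieterich (1994); its closed form for a single Coulomb stress step is easy to sketch. This is an illustrative special case (a step, not the gradual depletion-driven stressing of Groningen), with made-up parameter values:

```python
import numpy as np

def dieterich_rate(t, dcfs, r0, a_sigma, t_a):
    """Seismicity-rate response to a Coulomb stress step `dcfs`,
    after Dieterich (1994):
        R(t) = r0 / (1 + (exp(-dcfs/a_sigma) - 1) * exp(-t/t_a))
    r0 is the background rate, a_sigma = A*sigma the constitutive
    parameter, t_a the relaxation time. Values below are illustrative.
    """
    return r0 / (1.0 + (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a))

t = np.linspace(0.0, 10.0, 1001)  # years after the stress step
rate = dieterich_rate(t, dcfs=0.1, r0=1.0, a_sigma=0.02, t_a=1.0)
# rate jumps to r0*exp(dcfs/a_sigma) at t=0 and relaxes back to r0
```

The characteristic delayed, nonlinear response of R(t), as opposed to the instantaneous triggering of Coulomb failure models, is what lets the rate-and-state model reproduce the late onset of seismicity noted in the abstract.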
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the case of the 2004 M6.0 Parkfield earthquake. We carry out a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is related to a physically reasonable model constrained by the stress condition (namely, postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event, respectively. It indicates that the major postseismic deformation can be generally explained by a stress relaxation process for the Parkfield case. This result also indicates that the data to constrain the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of about 1.15 in seismic velocity.
The aim of this paper is to characterize the spatio-temporal distribution of Central European seismicity. Specifically, by using a non-parametric statistical approach, the proportional hazard model, leading to an empirical estimation of the hazard function, we provide some constraints on the time behavior of earthquake generation mechanisms. The results indicate that the most conspicuous characteristic of M_w 4.0+ earthquakes is a temporal clustering lasting a couple of years. This suggests that the probability of occurrence increases immediately after a previous event. After a few years, the process becomes almost time independent. Furthermore, we investigate the cluster properties of the seismicity of Central Europe by comparing the obtained result with that of synthetic catalogs generated by the epidemic type aftershock sequence (ETAS) model, which has previously been applied successfully to short-term clustering. Our results indicate that the ETAS model is not well suited to describe the seismicity as a whole, while it is able to capture the features of the short-term behaviour. Remarkably, similar results have previously been found for Italy using a higher magnitude threshold.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9-10 February 2006
Similar power laws for foreshock and aftershock sequences in a spring block model for earthquakes
(1999)
Earthquake swarms are often assumed to result from an intrusion of fluids into the seismogenic zone, causing seismicity patterns which significantly differ from aftershock sequences. But neither the temporal evolution nor the energy release of earthquake swarms is generally well understood. Because of the lack of descriptive empirical laws, the comparison with model simulations is typically restricted to aspects of the overall behaviour such as the frequency-magnitude distribution. However, previous investigations into a large earthquake swarm which occurred in the year 2000 in Vogtland/northwest Bohemia, Central Europe, revealed some well-defined characteristics which allow a rigorous test of model assumptions. In this study, simulations are performed of a discretized fault plane embedded in a 3-D elastic half-space. Earthquakes are triggered by fluid intrusion as well as by co-seismic and post-seismic stress changes. The model is able to reproduce the main observations, such as the fractal temporal occurrence of earthquakes, embedded aftershock sequences, and a power-law increase of the average seismic moment release. All these characteristics are found to result from stress triggering, whereas fluid diffusion is manifested in the spatiotemporal spreading of the hypocentres.
Seismic quiescence as an indicator for large earthquakes in a system of self-organized criticality
(2000)
In public perception, abnormal animal behavior is widely assumed to be a potential earthquake precursor, in strong contrast to the viewpoint in the natural sciences. Proponents of earthquake prediction via animals claim that animals feel and react abnormally to small changes in environmental and physico-chemical parameters related to the earthquake preparation process. In seismology, however, observational evidence for changes of physical parameters before earthquakes is very weak. In this study, we review 180 publications regarding abnormal animal behavior before earthquakes and analyze and discuss them with respect to (1) magnitude-distance relations, (2) foreshock activity, and (3) the quality and length of the published observations. More than 700 records of claimed animal precursors related to 160 earthquakes are reviewed, involving unusual behavior of more than 130 species. The precursor times range from months to seconds prior to the earthquakes, and the distances from a few to hundreds of kilometers. However, only 14 time series were published, whereas all other records are single observations. The time series are often short (the longest is 1 yr), or only small excerpts of the full data set are shown. The probability density of foreshocks and the occurrence of animal precursors are strikingly similar, suggesting that at least part of the reported animal precursors are in fact related to foreshocks. Another major difficulty for a systematic and statistical analysis is the high diversity of the data, which are often only anecdotal and retrospective. The study clearly demonstrates strong weaknesses or even deficits in many of the published reports on possible abnormal animal behavior. To improve the research on precursors, we suggest a scheme of yes-or-no questions to be assessed to ensure the quality of such claims.
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence-time distribution of large earthquakes based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors accounting for grouped magnitudes and random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early state of seismicity evolution, even though only a small number of large earthquakes has occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and therefore cannot be neglected. We show how to correct for the bias caused by magnitude errors. For the region of Parkfield, we find that the aperiodicity, or coefficient of variation, is clearly higher than in studies based solely on the large earthquakes.
Various techniques are utilized by the seismological community, extractive industries, energy and geoengineering companies to identify earthquake nucleation processes in close proximity to engineering operation points. These operations may comprise fluid extraction or injection, artificial water reservoir impoundment, open-pit and deep mining, deep geothermal power generation, or carbon sequestration. In this letter to the editor, we outline several lines of investigation that we suggest following to address the discrimination problem between natural seismicity and seismic events induced or triggered by geoengineering activities. These suggestions have been developed by a group of experts during several meetings and workshops, and we feel that their publication as a summary report is helpful for the geoscientific community. Specific investigation procedures and discrimination approaches, on which our recommendations are based, are also published in this Special Issue (SI) of the Journal of Seismology.
In recent years, the triggering of earthquakes has been discussed controversially with respect to the underlying mechanisms and the capability to evaluate the resulting seismic hazard. Apart from static stress interactions, other mechanisms including dynamic stress transfer have been proposed to be part of a complex triggering process. Exploiting the theoretical relation between long-term earthquake rates and stressing rate, we demonstrate that static stress changes resulting from an earthquake rupture allow us to predict quantitatively the aftershock activity without tuning specific model parameters. These forecasts are found to be in excellent agreement with all first-order characteristics of aftershocks, in particular, (1) the total number, (2) the power-law distance decay, (3) the scaling of the productivity with the mainshock magnitude, (4) the foreshock probability, and (5) the empirical Båth law providing the maximum aftershock magnitude, which supports the conclusion that static stress transfer is the major mechanism of earthquake triggering.
In low-seismicity regions, such as France or Germany, the estimation of probabilistic seismic hazard must cope with the difficult identification of active faults and with the small amount of seismic data available. Since the probabilistic hazard method was initiated, most studies have assumed a Poissonian occurrence of earthquakes. Here we propose a method that enables the inclusion of time and space dependences between earthquakes in the probabilistic estimation of hazard. Combining the Epidemic Type Aftershock Sequence (ETAS) seismicity model with a Monte Carlo technique, aftershocks are naturally accounted for in the hazard determination. The method is applied to the Pyrenees region in southern France. The impact on hazard of declustering and of the usual assumption that earthquakes occur according to a Poisson process is quantified, showing that aftershocks contribute on average less than 5 per cent to the probabilistic hazard, with an upper bound around 18 per cent.
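The ETAS-plus-Monte-Carlo combination rests on simulating synthetic catalogs. A temporal ETAS catalog can be generated by a branching construction: background events are Poisson, and each event spawns aftershocks with Gutenberg-Richter-dependent productivity and Omori-law delays. A minimal, uncalibrated sketch (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_etas(mu, k, alpha, c, p, b, m0, t_end):
    """Branching simulation of a temporal ETAS catalog. Background
    events arrive as a Poisson process with rate mu; an event of
    magnitude m spawns Poisson(k * 10**(alpha*(m - m0))) aftershocks
    with Omori-law delays (p > 1) and Gutenberg-Richter magnitudes.
    Chosen subcritical (branching ratio < 1) so cascades terminate."""
    beta = b * np.log(10.0)
    times = list(rng.uniform(0.0, t_end, rng.poisson(mu * t_end)))
    mags = list(m0 + rng.exponential(1.0 / beta, len(times)))
    queue = list(zip(times, mags))
    while queue:
        t0, m = queue.pop()
        n_kids = rng.poisson(k * 10.0 ** (alpha * (m - m0)))
        for _ in range(n_kids):
            # Omori-law waiting time via inverse-CDF sampling
            u = rng.random()
            dt = c * ((1.0 - u) ** (1.0 / (1.0 - p)) - 1.0)
            if t0 + dt < t_end:
                child = (t0 + dt, m0 + rng.exponential(1.0 / beta))
                times.append(child[0]); mags.append(child[1])
                queue.append(child)
    order = np.argsort(times)
    return np.asarray(times)[order], np.asarray(mags)[order]

t, m = simulate_etas(mu=0.2, k=0.2, alpha=0.5, c=0.01, p=1.2,
                     b=1.0, m0=3.0, t_end=365.0)
```

Hazard curves computed from many such synthetic catalogs, with and without the triggered events, give exactly the kind of declustering comparison quantified in the abstract.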
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes in earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model and estimate possible change-points, and we additionally use a likelihood-ratio test to calculate the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, demonstrating its capability to detect real change-points. Finally, we apply the approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
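The likelihood-ratio ingredient of such a change-point analysis can be sketched for binned Poisson counts: scan every split of the count sequence, compare the two-rate log-likelihood against the single-rate model, and keep the split with the largest ratio. This illustrates only the likelihood-ratio part, not the full Bayesian model selection of the abstract; the data are synthetic.

```python
import numpy as np

def poisson_changepoint(counts):
    """Single change-point scan for a sequence of Poisson counts
    (e.g. earthquakes per month). Returns the best split index and
    the log-likelihood-ratio statistic 2*(LL_split - LL_single).
    log(x!) terms are omitted since they cancel in the ratio."""
    c = np.asarray(counts, dtype=float)

    def loglik(x):
        lam = x.mean()
        return x.sum() * np.log(lam) - len(x) * lam if lam > 0 else 0.0

    l0 = loglik(c)
    best_k, best_lr = None, -np.inf
    for k in range(1, len(c)):
        lr = loglik(c[:k]) + loglik(c[k:]) - l0
        if lr > best_lr:
            best_k, best_lr = k, lr
    return best_k, 2.0 * best_lr

# synthetic monthly counts: rate jumps from 2 to 8 after 60 months
rng = np.random.default_rng(4)
counts = np.concatenate([rng.poisson(2.0, 60), rng.poisson(8.0, 60)])
k_hat, stat = poisson_changepoint(counts)
```

A large statistic at the recovered split indicates a significant rate change; the Bayes-factor step of the abstract would additionally penalize the extra parameter of the two-rate model.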