The first step in the estimation of probabilistic seismic hazard in a region commonly consists of the definition and characterization of the relevant seismic sources. Because seismicity in low-seismicity regions is often rather diffuse and faults are difficult to identify, large areal source zones are mostly used, under the hypothesis that seismicity is uniformly distributed inside each areal seismic source zone. In this study, the impact of this hypothesis on probabilistic hazard estimation is quantified through the generation of synthetic spatial seismicity distributions. Fractal seismicity distributions are generated inside a given source zone, and probabilistic hazard is computed for a set of sites located inside this zone. The impact of the spatial seismicity distribution is defined as the deviation from the hazard value obtained for a spatially uniform seismicity distribution. From the generation of a large number of synthetic distributions, the correlation between the fractal dimension D and the impact is derived. The results show that the assumption of spatially uniform seismicity tends to bias the hazard to higher values. The correlation can be used to determine the systematic biases and uncertainties for hazard estimations in real cases where the fractal dimension has been determined. We apply the technique in Germany (Cologne area) and in France (Alps).
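For readers who want to experiment with the D-impact correlation described above, the fractal dimension of an epicentre set can be estimated by box counting. The sketch below is illustrative and not taken from the study; the point set, scales, and fitting choices are assumptions.

```python
import numpy as np

def box_counting_dimension(xy, n_scales=8):
    """Estimate the fractal (capacity) dimension D of a 2-D epicentre set
    by box counting: count occupied boxes N(s) at box size s and fit
    log N(s) = D * log(1/s) + const."""
    xy = np.asarray(xy, dtype=float)
    origin = xy.min(axis=0)
    span = (xy.max(axis=0) - origin).max()
    sizes = span / 2 ** np.arange(1, n_scales + 1)
    counts = [len(np.unique(np.floor((xy - origin) / s).astype(int), axis=0))
              for s in sizes]
    D, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return D

# sanity check: a uniform point set in a square should give D close to 2
rng = np.random.default_rng(42)
print(box_counting_dimension(rng.uniform(0.0, 100.0, size=(5000, 2))))
```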
Groningen is the largest onshore gas field under production in Europe. Pressure depletion of the gas field started in 1963. In 1991, the first induced micro-earthquakes were located at reservoir level, with increasing rates in the following decades. Most of these events are of magnitude less than 2.0 and cannot be felt. However, the maximum observed magnitude increased continuously over the years until the largest event so far, with ML = 3.6, was recorded in 2014, which finally led to the decision to reduce production. This causal sequence illustrates the crucial role of understanding and modeling the relation between production and induced seismicity for economic planning and hazard assessment. Here we test whether the seismicity induced by gas production can be modeled by the statistical response of fault networks with rate-and-state-dependent frictional behavior. We use the long and complete local seismic catalog, and additionally detailed information on production-induced changes at the reservoir level, to test different seismicity models. Both the changes of the fluid pressure and of the reservoir compaction are tested as input to approximate the Coulomb stress changes. We find that the rate-and-state model with a constant tectonic background seismicity rate can reproduce the observed long delay of the seismicity onset. In contrast, so-called Coulomb failure models with instantaneous earthquake nucleation need to assume that all faults are initially far from a critical state of stress to explain the delay. Our rate-and-state model based on the fluid pore pressure fits the spatiotemporal pattern of the seismicity best, and the fit further improves when the fault density and orientation are taken into account. Despite its simplicity, with only three free parameters, the rate-and-state model can reproduce the main statistical features of the observed activity.
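A minimal numerical sketch of the rate-and-state seismicity response mentioned above (Dieterich, 1994), assuming an illustrative, linearly growing Coulomb stressing rate as a stand-in for depletion-induced loading; all parameter values are placeholders, not the calibrated Groningen values.

```python
import numpy as np

# Dieterich (1994) seismicity model: the state variable gamma evolves as
# d(gamma) = (dt - gamma * dS) / (A*sigma), and the seismicity rate is
# R = r / (gamma * Sdot_tec), with r the constant tectonic background rate.
A_sigma = 0.01       # MPa, constitutive parameter A*sigma (placeholder)
Sdot_tec = 1e-5      # MPa/yr, tectonic Coulomb stressing rate (placeholder)
r = 0.1              # events/yr, tectonic background rate (placeholder)

t = np.linspace(0.0, 60.0, 6001)      # years since production start
dt = t[1] - t[0]
Sdot = Sdot_tec + 2e-4 * t            # stressing rate growing with depletion
gamma = 1.0 / Sdot_tec                # steady-state initial condition
R = np.empty_like(t)
for i, sd in enumerate(Sdot):
    gamma += (dt - gamma * sd * dt) / A_sigma    # explicit Euler step
    R[i] = r / (gamma * Sdot_tec)

# R stays near the background rate r for years before accelerating, which
# mimics the delayed onset of induced seismicity described above.
print(f"rate after 10 yr: {R[1000]:.2f} events/yr, after 50 yr: {R[5000]:.1f}")
```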
The aim of this paper is to characterize the spatio-temporal distribution of Central European seismicity. Specifically, by using a non-parametric statistical approach, the proportional hazard model, leading to an empirical estimation of the hazard function, we provide some constraints on the time behavior of earthquake generation mechanisms. The results indicate that the most conspicuous characteristic of Mw 4.0+ earthquakes is a temporal clustering lasting a couple of years. This suggests that the probability of occurrence increases immediately after a previous event. After a few years, the process becomes almost time independent. Furthermore, we investigate the cluster properties of Central European seismicity by comparing the obtained results with those of synthetic catalogs generated by the epidemic type aftershock sequences (ETAS) model, which has previously been applied successfully to short-term clustering. Our results indicate that ETAS is not well suited to describe the seismicity as a whole, while it is able to capture the features of the short-term behaviour. Remarkably, similar results have previously been found for Italy using a higher magnitude threshold.
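The empirical hazard function at the core of this approach can be approximated non-parametrically from interevent times. The sketch below is a simple binned estimator applied to toy distributions, not the proportional hazard model used in the paper.

```python
import numpy as np

def empirical_hazard(waiting_times, bins):
    """Binned empirical hazard function h(t): the conditional rate of an
    event at elapsed time t, given that no event has occurred before t.
    Decaying h(t) indicates clustering; flat h(t) indicates a Poisson-like,
    time-independent process."""
    wt = np.asarray(waiting_times, dtype=float)
    counts, edges = np.histogram(wt, bins=bins)
    at_risk = len(wt) - np.concatenate(([0], np.cumsum(counts)[:-1]))
    with np.errstate(divide="ignore", invalid="ignore"):
        h = counts / (at_risk * np.diff(edges))
    return edges[:-1], h

# toy comparison: clustered (gamma, shape < 1) vs Poissonian waiting times
rng = np.random.default_rng(1)
bins = np.linspace(0.0, 5.0, 21)
_, h_clust = empirical_hazard(rng.gamma(0.5, 2.0, 10000), bins)
_, h_pois = empirical_hazard(rng.exponential(1.0, 10000), bins)
print(np.round(h_clust[:5], 2))   # decreasing -> temporal clustering
print(np.round(h_pois[:5], 2))    # roughly constant -> time independence
```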
Earthquake swarms are often assumed to result from an intrusion of fluids into the seismogenic zone, causing seismicity patterns which differ significantly from aftershock sequences. However, neither the temporal evolution nor the energy release of earthquake swarms is generally well understood. Because of the lack of descriptive empirical laws, the comparison with model simulations is typically restricted to aspects of the overall behaviour, such as the frequency-magnitude distribution. However, previous investigations of a large earthquake swarm which occurred in the year 2000 in Vogtland/northwest Bohemia, Central Europe, revealed some well-defined characteristics which allow a rigorous test of model assumptions. In this study, simulations are performed for a discretized fault plane embedded in a 3-D elastic half-space. Earthquakes are triggered by fluid intrusion as well as by co-seismic and post-seismic stress changes. The model is able to reproduce the main observations, such as the fractal temporal occurrence of earthquakes, embedded aftershock sequences, and a power-law increase of the average seismic moment release. All these characteristics are found to result from stress triggering, whereas fluid diffusion is manifested in the spatiotemporal spreading of the hypocentres.
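The power-law increase of the average seismic moment release reported above can be checked on a catalogue with a simple log-log fit; the synthetic swarm below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic swarm: the average seismic moment grows as t**alpha (alpha = 1.2)
t = np.sort(rng.uniform(0.1, 10.0, 2000))                    # days
m0 = t ** 1.2 * rng.lognormal(0.0, 1.0, t.size)              # arbitrary units

# binned averages and a power-law fit in log-log space
edges = np.logspace(-1, 1, 15)
centers = np.sqrt(edges[:-1] * edges[1:])
avg = np.array([m0[(t >= lo) & (t < hi)].mean()
                for lo, hi in zip(edges[:-1], edges[1:])])
alpha, _ = np.polyfit(np.log(centers), np.log(avg), 1)
print(f"fitted moment-release exponent: {alpha:.2f}")        # close to 1.2
```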
In public perception, abnormal animal behavior is widely assumed to be a potential earthquake precursor, in strong contrast to the viewpoint in the natural sciences. Proponents of earthquake prediction via animals claim that animals feel and react abnormally to small changes in environmental and physico-chemical parameters related to the earthquake preparation process. In seismology, however, observational evidence for changes of physical parameters before earthquakes is very weak. In this study, we review 180 publications regarding abnormal animal behavior before earthquakes and analyze and discuss them with respect to (1) magnitude-distance relations, (2) foreshock activity, and (3) the quality and length of the published observations. More than 700 records of claimed animal precursors related to 160 earthquakes are reviewed, involving unusual behavior of more than 130 species. The precursor times range from months to seconds prior to the earthquakes, and the distances from a few to hundreds of kilometers. However, only 14 time series were published, whereas all other records are single observations. The time series are often short (the longest is 1 yr), or only small excerpts of the full data set are shown. The probability density of foreshocks and the occurrence of animal precursors are strikingly similar, suggesting that at least part of the reported animal precursors are in fact related to foreshocks. Another major difficulty for a systematic and statistical analysis is the high diversity of the data, which are often only anecdotal and retrospective. The study clearly demonstrates strong weaknesses or even deficits in many of the published reports on possible abnormal animal behavior. To improve the research on precursors, we suggest a scheme of yes-or-no questions to assess the quality of such claims.
Various techniques are utilized by the seismological community, extractive industries, energy and geoengineering companies to identify earthquake nucleation processes in close proximity to engineering operation points. These operations may comprise fluid extraction or injection, artificial water reservoir impoundment, open-pit and deep mining, deep geothermal power generation, or carbon sequestration. In this letter to the editor, we outline several lines of investigation that we suggest be followed to address the problem of discriminating between natural seismicity and seismic events induced or triggered by geoengineering activities. These suggestions have been developed by a group of experts during several meetings and workshops, and we feel that their publication as a summary report is helpful for the geoscientific community. Specific investigation procedures and discrimination approaches, on which our recommendations are based, are also published in this Special Issue (SI) of the Journal of Seismology.
In low-seismicity regions, such as France or Germany, the estimation of probabilistic seismic hazard must cope with the difficult identification of active faults and with the small amount of seismic data available. Since the probabilistic hazard method was initiated, most studies have assumed a Poissonian occurrence of earthquakes. Here we propose a method that enables the inclusion of time and space dependences between earthquakes in the probabilistic estimation of hazard. Combining the Epidemic Type Aftershock Sequence (ETAS) seismicity model with a Monte Carlo technique, aftershocks are naturally accounted for in the hazard determination. The method is applied to the Pyrenees region in Southern France. The impact on hazard of declustering and of the usual assumption that earthquakes occur according to a Poisson process is quantified, showing that aftershocks contribute on average less than 5 per cent to the probabilistic hazard, with an upper bound around 18 per cent.
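A minimal temporal ETAS simulator of the kind that, combined with Monte Carlo sampling, allows aftershocks to be carried through the hazard computation. The parameters are illustrative, not calibrated to the Pyrenees.

```python
import numpy as np

rng = np.random.default_rng(0)
# illustrative ETAS parameters (subcritical branching ratio ~0.75)
mu, K, a, c, p, b, m0, T = 0.2, 0.15, 0.8, 0.01, 1.2, 1.0, 2.0, 5000.0

def gr_mags(n):
    """Gutenberg-Richter magnitudes above the cutoff m0."""
    return m0 + rng.exponential(1.0 / (b * np.log(10)), n)

n_bg = rng.poisson(mu * T)                       # Poissonian background
events = list(zip(rng.uniform(0.0, T, n_bg), gr_mags(n_bg)))
queue = list(events)
while queue:                                     # cascade of aftershocks
    t_par, m_par = queue.pop()
    n_aft = rng.poisson(K * 10 ** (a * (m_par - m0)))
    # Omori-Utsu waiting times via inverse-CDF sampling
    delays = c * ((1 - rng.uniform(size=n_aft)) ** (-1 / (p - 1)) - 1)
    for ev in zip(t_par + delays, gr_mags(n_aft)):
        if ev[0] < T:
            events.append(ev)
            queue.append(ev)
print(f"{n_bg} background events, {len(events)} in total")
```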
An important task of seismic hazard assessment consists of estimating the rate of seismic moment release, which is correlated with the rate of tectonic deformation and the seismic coupling. However, estimates of deformation depend on the type of information utilized (e.g. geodetic, geological, seismic) and include large uncertainties. We therefore estimate the deformation rate in the Lower Rhine Embayment (LRE), Germany, using an integrated approach in which the uncertainties have been systematically incorporated. On the basis of a new homogeneous earthquake catalogue, we initially determine the frequency-magnitude distribution by statistical methods. In particular, we focus on an adequate estimation of the upper bound of the Gutenberg-Richter relation and demonstrate the importance of additional palaeoseismological information. The integration of seismological and geological information yields a probability distribution of the upper bound magnitude. Using this distribution together with the distribution of Gutenberg-Richter a and b values, we perform Monte Carlo simulations to derive the seismic moment release as a function of the observation time. The seismic moment release estimated from synthetic earthquake catalogues of short catalogue length is found to systematically underestimate the long-term moment rate, which can be determined analytically. The moment release recorded in the LRE over the last 250 yr is found to be in good agreement with the probability distribution resulting from the Monte Carlo simulations. Furthermore, the long-term distribution is, within its uncertainties, consistent with the moment rate derived by geological measurements, indicating an almost complete seismic coupling in this region. By means of Kostrov's formula, we additionally calculate the full deformation rate tensor using the distribution of known focal mechanisms in the LRE. Finally, we use the same approach to calculate the seismic moment and the deformation rate for two subsets of the catalogue corresponding to the east- and west-dipping faults, respectively.
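The Monte Carlo moment-release computation can be sketched as follows: magnitudes are drawn from a truncated Gutenberg-Richter law and converted to moment with the Hanks-Kanamori relation. The values are placeholders, not the LRE numbers; the point is that short observation windows typically miss the rare large events that dominate the long-term moment rate.

```python
import numpy as np

rng = np.random.default_rng(3)

def moment_rate(b, m_min, m_max, rate_min, t_obs, n_sim=1000):
    """Simulated seismic moment rates (Nm/yr) from synthetic catalogues of
    length t_obs (yr), with a truncated Gutenberg-Richter law and annual
    rate rate_min of events above m_min."""
    beta = b * np.log(10)
    trunc = 1.0 - np.exp(-beta * (m_max - m_min))
    rates = np.empty(n_sim)
    for k in range(n_sim):
        n = rng.poisson(rate_min * t_obs)
        m = m_min - np.log(1.0 - rng.uniform(size=n) * trunc) / beta
        rates[k] = np.sum(10 ** (1.5 * m + 9.1)) / t_obs   # Hanks-Kanamori
    return rates

# longer catalogues converge toward the analytic long-term moment rate
for T in (50, 250, 10000):
    print(T, "yr:", f"{np.median(moment_rate(1.0, 4.0, 6.7, 0.05, T)):.2e}")
```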
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise from nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, and deep neural network (DNN) approaches are all superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effects of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods, but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirms that simple stress metrics perform better than the classic Coulomb failure stress calculations and also better than the distance-slip probabilistic model.
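ROC analysis for binary aftershock maps can be reproduced in a few lines: cells are ranked by the stress metric and the true-positive rate is traced against the false-positive rate. The toy data below assume a hypothetical logistic link between stress change and aftershock occurrence.

```python
import numpy as np

def roc(metric, occurred):
    """ROC curve for a binary aftershock map: rank cells by the stress
    metric and trace the true-positive vs the false-positive rate."""
    y = np.asarray(occurred, dtype=bool)[np.argsort(metric)[::-1]]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(~y) / (~y).sum()))
    auc = np.sum(np.diff(fpr) * 0.5 * (tpr[1:] + tpr[:-1]))  # trapezoid rule
    return fpr, tpr, auc

# toy grid: aftershock probability rises with the stress change via a
# hypothetical logistic link; AUC 0.5 = no skill, 1.0 = perfect
rng = np.random.default_rng(5)
dcfs = rng.normal(0.0, 1.0, 20000)                 # stress metric per cell
occurred = rng.uniform(size=dcfs.size) < 0.05 / (1 + np.exp(-2 * dcfs))
print(f"AUC = {roc(dcfs, occurred)[2]:.2f}")
```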
Introduction to special issue: Dynamics of seismicity patterns and earthquake triggering - Preface
(2006)
Reliable estimates of the magnitude of completeness (Mc) are essential for a correct interpretation of seismic catalogues. The spatial distribution of Mc may be strongly variable and difficult to assess in mining environments, owing to the presence of galleries, cavities, fractured regions, porous media and different mineralogical bodies, as well as in consequence of an inhomogeneous spatial distribution of the seismicity. We apply a 3-D modification of the probabilistic magnitude of completeness (PMC) method, which relies on the analysis of network detection capabilities. In our approach, the probability of detecting an event depends on its magnitude, the source-receiver Euclidean distance and the source-receiver direction. The method is proposed for the study of the spatial distribution of the magnitude of completeness in mining environments and is applied here to a 2-month acoustic emission (AE) data set recorded at the Morsleben salt mine, Germany. The dense seismic network and the large data set, which includes more than one million events, enable a detailed testing of the method. The method is intended specifically for strongly heterogeneous media; it can also be used for network installations with sensors whose sensitivity depends on the direction of the incoming wave (e.g. some piezoelectric sensors). In the absence of strong heterogeneities, the standard PMC approach should be used. We show that the PMC estimates in mines depend strongly on the source-receiver direction and cannot be correctly accounted for using the standard PMC approach; results improve, however, when the proposed 3-D modification of the PMC method is adopted. Our analysis of a central horizontal and a vertical section yields a magnitude of completeness of about Mc ≈ 1 (AE magnitude) at the centre of the network, which increases up to Mc ≈ 4 at larger distances outside the network; the best detection performance is estimated for a NNE-SSE elongated region, which corresponds to the strike direction of the low-attenuating salt body. Our approach provides small-scale details about the capability of sensors to detect an earthquake, which can be linked to the presence of heterogeneities in specific directions. The reduced detection performance in the presence of strong structural heterogeneities (cavities) is confirmed by synthetic waveform modelling in heterogeneous media.
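The core of a PMC-style analysis is an empirical detection probability per sensor. A minimal sketch, assuming synthetic event-sensor pairs and simple magnitude/distance/azimuth binning (the directional binning is what the 3-D modification adds):

```python
import numpy as np

def detection_probability(mag, dist, azi, picked, dm=0.5, dr=200.0, n_azi=8):
    """Empirical per-sensor detection probabilities in the spirit of PMC,
    binned by magnitude, source-receiver distance and azimuth sector.
    `picked` flags whether the sensor detected each event."""
    keys = np.stack([(mag / dm).astype(int),
                     (dist / dr).astype(int),
                     (azi % 360 / (360 / n_azi)).astype(int)], axis=1)
    return {tuple(k): picked[np.all(keys == k, axis=1)].mean()
            for k in np.unique(keys, axis=0)}

# synthetic event-sensor pairs: detection is easier for larger and closer
# events and poorer toward one azimuth sector (all assumptions)
rng = np.random.default_rng(6)
n = 50000
mag, dist, azi = (rng.uniform(0, 4, n), rng.uniform(0, 1000, n),
                  rng.uniform(0, 360, n))
p_true = 1 / (1 + np.exp(-(2 * mag - dist / 200 - (azi > 180))))
prob = detection_probability(mag, dist, azi, rng.uniform(size=n) < p_true)
print(prob[(4, 0, 0)])   # one (magnitude, distance, azimuth) bin
```

With such a table, Mc at a grid node is the smallest magnitude for which the combined network detection probability exceeds a chosen threshold.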
Identification and characterization of growing large-scale en-echelon fractures in a salt mine
(2014)
The spatiotemporal seismicity of acoustic emission (AE) events recorded in the Morsleben salt mine is investigated. Almost a year after the backfilling of the cavities, which started in 2003, microevents are distributed in distinctive stripe shapes above cavities at different depth levels. The physical forces driving the creation of these stripes are still unknown. This study aims to find the active stripes and track fracture development over time by combining two different temporal and spatial clustering techniques into a single methodological approach. Anomalous seismicity parameter values, such as sharp b-value changes for two active stripes, are good indicators of possible stress accumulation at the stripe tips. We identify the formation of two new seismicity stripes and show that the AE activity in active clusters migrates mostly unidirectionally, eastward and upward. This indicates that the growth of the underlying macrofractures is controlled by the gradient of extensional stress. Studying the size-distribution characteristics in terms of the frequency-magnitude distribution and the b-value in the active phase and in the phase of constant seismicity rate shows that deviations from the Gutenberg-Richter power law can be explained by the inclusion of different activity phases: (1) the inactive period before the formation of macrofractures, which is characterized by a deficit of larger events (higher b-values), and (2) the period of fracture growth, characterized by the occurrence of larger events (smaller b-values).
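The b-value contrasts between activity phases mentioned above are usually estimated with the Aki-Utsu maximum-likelihood formula. A minimal sketch with synthetic magnitudes for a quiet (high-b) and an active (low-b) phase:

```python
import numpy as np

def b_value(mags, mc, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value for magnitudes above mc (dm is
    the magnitude binning width) and its standard error b/sqrt(n)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2))
    return b, b / np.sqrt(len(m))

# synthetic magnitudes for a quiet (high-b) and an active (low-b) phase
rng = np.random.default_rng(11)
phases = {"quiet": rng.exponential(1 / (1.3 * np.log(10)), 800) + 1.0,
          "active": rng.exponential(1 / (0.8 * np.log(10)), 800) + 1.0}
for name, mags in phases.items():
    b, db = b_value(mags, mc=1.0)
    print(f"{name}: b = {b:.2f} +/- {db:.2f}")
```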
The statistics of time delays between successive earthquakes has recently been claimed to be universal and to show the existence of clustering beyond the duration of aftershock bursts. We demonstrate that these claims are unjustified. Stochastic simulations with Poissonian background activity and triggered Omori-type aftershock sequences are shown to reproduce the interevent-time distributions observed on different spatial and magnitude scales in California. Thus the empirical distribution can be explained without any additional long-term clustering. Furthermore, we find that the shape of the interevent-time distribution, which can be approximated by the gamma distribution, is determined by the percentage of mainshocks in the catalog. This percentage can be calculated from the mean and variance of the interevent times and varies between 5% and 90% for different regions in California. Our investigation of stochastic simulations indicates that the interevent-time distribution provides a nonparametric reconstruction of the mainshock magnitude-frequency distribution that is superior to standard declustering algorithms.
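The connection between the gamma-shaped interevent-time distribution and the mainshock fraction can be explored with a toy catalog. Treating the fitted gamma shape parameter as an approximation of the mainshock fraction is a moment-based reading of the abstract's statement that the percentage follows from the mean and variance; it is not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# toy catalog: Poissonian mainshocks, each with a short Omori-type burst
main = np.cumsum(rng.exponential(10.0, 400))
aft = [t + 0.1 * (u / (1 - u))                       # Omori kernel, p = 2
       for t in main for u in rng.uniform(size=rng.poisson(3))]
times = np.sort(np.concatenate([main, aft]))
dt = np.diff(times)
dt = dt[dt > 0]

# gamma shape from moments; under the interpretation above it approximates
# the mainshock fraction in the catalog
shape_mom = dt.mean() ** 2 / dt.var()
shape_mle, _, _ = stats.gamma.fit(dt, floc=0.0)
print(f"true mainshock fraction: {len(main) / len(times):.2f}")
print(f"gamma shape: {shape_mom:.2f} (moments), {shape_mle:.2f} (MLE)")
```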
Geysers are hot springs whose frequency of water eruptions remains poorly understood. We set up a local broadband seismic network for 1 year at Strokkur geyser, Iceland, and developed an unprecedented catalog of 73,466 eruptions. We detected 50,135 single eruptions but find that the geyser is also characterized by sets of up to six eruptions in quick succession. The number of single to sextuple eruptions decreased exponentially, while the mean waiting time after an eruption increased linearly (3.7 to 16.4 min). While secondary eruptions within double to sextuple eruptions have a smaller mean seismic amplitude, the amplitude of the first eruption is comparable for all eruption types. We statistically model the eruption frequency assuming discharges proportional to the eruption multiplicity and a constant probability for subsequent events within a multiple eruption. The waiting time after an eruption is predictable, but not the type or amplitude of the next one.
Plain Language Summary: Geysers are springs that often erupt in hot water fountains. They erupt more often than volcanoes but are quite similar. Nevertheless, it is poorly understood how often volcanoes and also geysers erupt. We created a list of 73,466 eruption times of Strokkur geyser, Iceland, from 1 year of seismic data. The geyser erupted one to six times in quick succession. We found 50,135 single eruptions but only 1 sextuple eruption, while the mean waiting time increased from 3.7 min after single eruptions to 16.4 min after sextuple eruptions. Mean amplitudes of each eruption type were higher for single eruptions, but all first eruptions in a succession were similar in height. Assuming a constant heat inflow at depth, we can predict the waiting time after an eruption but not the type or amplitude of the next one.
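The eruption statistics described above suggest a simple branching picture: a constant probability of one more eruption within a set yields an exponentially decreasing multiplicity count, and the waiting time grows linearly with multiplicity. In the sketch below only the single- and sextuple-eruption counts and the two mean waiting times come from the abstract; the intermediate counts are hypothetical values chosen only to be consistent with the quoted total of 73,466 eruptions.

```python
import numpy as np

multiplicity = np.arange(1, 7)
# 50135 singles and 1 sextuple are from the abstract; the rest is assumed
counts = np.array([50135, 8000, 2000, 300, 25, 1])
slope, _ = np.polyfit(multiplicity, np.log(counts), 1)
q = np.exp(slope)          # constant continuation probability per eruption
print(f"estimated continuation probability: {q:.2f}")

# mean waiting time grows linearly from 3.7 min (single) to 16.4 (sextuple)
waiting = 3.7 + (16.4 - 3.7) / 5 * (multiplicity - 1)
for k, w in zip(multiplicity, waiting):
    print(f"after a {k}-fold eruption: expected wait {w:.1f} min")
```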
A volcanic eruption is usually preceded by seismic precursors, but their interpretation and use for forecasting the eruption onset time remain a challenge. Part of the eruptive processes in open volcanic conduits may be similar to those encountered in geysers. Since geysers erupt more often, they are useful sites for testing new forecasting methods. We tested the application of Permutation Entropy (PE) as a robust method to assess the complexity of seismic recordings of the Strokkur geyser, Iceland. Strokkur features eruptive cycles lasting several minutes, enabling us to verify in 63 recorded cycles whether PE behaves consistently from one eruption to the next. We performed synthetic tests to understand the effect of different parameter settings in the PE calculation. Our application to Strokkur shows a distinct, repeating PE pattern consistent with previously identified phases in the eruptive cycle. We find a systematic increase in PE within the last 15 s before the eruption, indicating that an eruption is imminent. We quantified the predictive power of PE, showing that PE performs better than seismic signal strength or quiescence when it comes to forecasting eruptions.
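Permutation entropy itself is straightforward to implement: it is the Shannon entropy of the ordinal patterns of the signal. A minimal sketch (order and delay are free choices here, not the settings used in the study):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of `order` samples spaced by `delay`; close to 1 for white
    noise, low for regular dynamics."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    idx = np.arange(n)[:, None] + np.arange(order)[None, :] * delay
    _, counts = np.unique(np.argsort(x[idx], axis=1), axis=0,
                          return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

# a sliding-window PE over the seismic trace would show the pre-eruptive
# rise; here we only check the two limiting cases
rng = np.random.default_rng(4)
print(permutation_entropy(rng.normal(size=5000)))           # near 1
print(permutation_entropy(np.sin(0.05 * np.arange(5000))))  # much lower
```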
Earthquake faults interact with each other in many different ways, and hence earthquakes cannot be treated as individual independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, this does not necessarily mean that the earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can explain the occurrence of pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time dependence of the process by looking at time-averaged values, as is largely done in standard procedures of seismic hazard assessment, can thus lead to erroneous estimates not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed at a better understanding and characterization of earthquake interactions in order to improve seismic hazard estimations. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meters to kilometers) to fault-system size (hundreds of kilometers), while the temporal scale of interest varied from the immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. The analyses of earthquake and deformation data were accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip and fluid flow. Finally, new strategies and methods have been developed and tested which help to improve seismic hazard estimations by appropriately taking the time dependence of the earthquake process into account.
Earthquakes occurring close to hydrocarbon fields under production are often critically examined as potentially induced or triggered. However, clear and testable rules to discriminate between such events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases in question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of integration over probability density functions. The probability that events have been human triggered/induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case, a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced and only triggered events is theoretically possible if the probability functions are convolved with a rupture fault filter. We apply the approach to three recent mainshock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Sohlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly human induced, not even human triggered, and a third case in between both extremes.
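The discrimination idea reduces to a one-line probability once the rates are modeled. A deliberately simplified 1-D sketch with purely illustrative numbers, not the boundary-element modeling of the study:

```python
import numpy as np

# If R_ind is the modeled induced rate (from Coulomb stress changes and the
# seismicity model) and R_tec the tectonic background rate at a location x,
# the probability of the event being human induced/triggered is
# R_ind / (R_tec + R_ind), averaged over the hypocentre uncertainty.
x = np.linspace(-10.0, 10.0, 401)                 # km, profile through the field
dx = x[1] - x[0]
r_tec = np.full_like(x, 0.02)                     # background rate (placeholder)
r_ind = 0.30 * np.exp(-x**2 / (2 * 3.0**2))       # induced rate (placeholder)
loc_pdf = np.exp(-(x - 2.0) ** 2 / (2 * 1.5**2))  # located at 2 +/- 1.5 km
loc_pdf /= loc_pdf.sum() * dx
P = np.sum(loc_pdf * r_ind / (r_tec + r_ind)) * dx
print(f"probability of being human induced/triggered: {P:.2f}")
```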
The Gutenberg-Richter relation for earthquake magnitudes is the most famous empirical law in seismology. It states that the frequency of earthquake magnitudes follows an exponential distribution; this has been found to be a robust feature of seismicity above the completeness magnitude, independent of whether global, regional, or local seismicity is analyzed. However, the exponent b of the distribution varies significantly in space and time, which is important for process understanding and seismic hazard assessment, particularly because the Gutenberg-Richter b-value acts as a proxy for the stress state and quantifies the ratio of large to small earthquakes. In our work, we focus on the automatic detection of statistically significant temporal changes of the b-value in seismicity data. In our approach, we use Bayes factors for model selection and estimate multiple change-points of the frequency-magnitude distribution in time. The method is first applied to synthetic data, showing its capability to detect change-points as a function of sample size and b-value contrast. Finally, we apply the approach to observational data sets for which b-value changes have previously been stated. Our analysis of foreshock and aftershock sequences related to mainshocks, as well as earthquake swarms, shows that only a portion of the b-value changes is statistically significant.
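For the exponential (Gutenberg-Richter) magnitude model, Bayes factors are analytic under a conjugate prior. The sketch below is a simplified single-change-point version of the idea, not the study's full multiple-change-point method:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(x, a0=1.0, l0=1.0):
    """Log marginal likelihood of exponential samples x (magnitudes minus
    Mc) under a conjugate Gamma(a0, l0) prior on beta = b * ln(10)."""
    n, s = len(x), np.sum(x)
    return (a0 * np.log(l0) + gammaln(a0 + n)
            - gammaln(a0) - (a0 + n) * np.log(l0 + s))

def change_point_evidence(x, min_seg=30):
    """Log Bayes factor of a single-change-point model (uniform prior on
    the change index) against a constant-b model, plus the best index."""
    ks = np.arange(min_seg, len(x) - min_seg)
    log_m1 = np.array([log_marginal(x[:k]) + log_marginal(x[k:]) for k in ks])
    log_bf = np.logaddexp.reduce(log_m1) - np.log(len(ks)) - log_marginal(x)
    return log_bf, ks[np.argmax(log_m1)]

# synthetic magnitude sequence: b drops from 1.0 to 0.7 after 500 events
rng = np.random.default_rng(8)
x = np.concatenate([rng.exponential(1 / (1.0 * np.log(10)), 500),
                    rng.exponential(1 / (0.7 * np.log(10)), 500)])
log_bf, k = change_point_evidence(x)
print(f"log Bayes factor = {log_bf:.1f}, change point near event {k}")
```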
We study changes in effective stress (normal stress minus pore pressure) that occurred in the French Alps during the 2003-2004 Ubaye earthquake swarm. Two complementary data sets are used. First, a set of 974 relocated events allows us to finely characterize the shape of the seismogenic area and the spatial migration of seismicity during the crisis. Relocations are performed with a double-difference algorithm. We compute differences in travel times at stations both from absolute picking times and from cross-correlation delays of multiplets. The resulting catalog reveals a swarm alignment along a single planar structure striking N130°E and dipping 80°W. This relocated activity displays migration properties consistent with triggering by a diffusive fluid overpressure front. This observation argues in favor of a deep-seated fluid circulation responsible for a significant part of the seismic activity in Ubaye. Second, we analyze time series of earthquake detections at a single seismological station located just above the swarm. This time series forms a dense chronicle of more than 16,000 events. We use it to estimate the history of effective stress changes during the sequence. For this purpose we model the rate of events by a stochastic epidemic-type aftershock sequence model with a nonstationary background seismic rate λ0(t). This background rate is estimated in discrete time windows. Window lengths are determined optimally according to a new change-point method based on the interevent-time distribution. We propose that background events are triggered directly by a transient fluid circulation at depth. Then, using rate-and-state constitutive friction laws, we estimate the changes in effective stress required by the observed rate of background events. We assume that changes in effective stress occurred under constant shear stressing rate conditions. We finally obtain a maximum change in effective stress close to -8 MPa, which corresponds to a maximum fluid overpressure of about 8 MPa under constant normal stress conditions. This estimate is in good agreement with values obtained from numerical modeling of fluid flow at depth and with direct measurements reported from fluid injection experiments.
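Migration consistent with a diffusive fluid overpressure front is commonly checked against the parabolic envelope r(t) = sqrt(4*pi*D*t) (Shapiro et al., 1997). A rough estimator of the hydraulic diffusivity D from a hypocentre cloud, demonstrated on synthetic data:

```python
import numpy as np

def estimate_diffusivity(t, r, n_bins=20):
    """Estimate hydraulic diffusivity D by matching the upper envelope of
    the hypocentre cloud to the triggering front r = sqrt(4*pi*D*t)."""
    edges = np.linspace(0.0, t.max(), n_bins + 1)
    D = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = np.flatnonzero((t >= lo) & (t < hi))
        if sel.size:
            i = sel[np.argmax(r[sel])]       # farthest event in the bin
            D.append(r[i] ** 2 / (4 * np.pi * t[i]))
    return np.mean(D)

# synthetic cloud spreading diffusively with D = 1 m^2/s
rng = np.random.default_rng(9)
t = rng.uniform(1e4, 1e7, 2000)                               # s
r = np.sqrt(4 * np.pi * 1.0 * t) * rng.uniform(0, 1, t.size)  # m
print(f"estimated diffusivity: {estimate_diffusivity(t, r):.2f} m^2/s")
```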
Due to large uncertainties and non-uniqueness in fault slip inversion, the investigation of stress coupling based on the direct comparison of independent slip inversions, for example between the coseismic slip distribution and the interseismic slip deficit, may lead to ambiguous conclusions. In this study, we therefore adopt the stress-constrained joint inversion in the Bayesian approach of Wang et al. and implement the physical hypothesis of stress coupling as a prior. We test the hypothesis that interseismic locking is coupled with the coseismic rupture and that the early post-seismic deformation is a stress relaxation process in response to the coseismic stress perturbation. We characterize the role of stress coupling in the seismic cycle by evaluating the efficiency of the model in explaining the available data. Taking the 2004 M6 Parkfield earthquake as a study case, we find that the stress coupling hypothesis is in agreement with the data. The coseismic rupture zone is found to be strongly locked during the interseismic phase, and the post-seismic slip zone is indicated to be weakly creeping. The post-seismic deformation plays an important role in rebuilding stress in the coseismic rupture zone. Based on our results for the stress accumulation during both the inter- and post-seismic phases in the coseismic rupture zone, together with the coseismic stress drop, we estimate a recurrence time of M6 earthquakes at Parkfield of around 23-41 yr, suggesting that the interval of 38 yr between the two most recent M6 events at Parkfield is not a surprise.
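The recurrence estimate is, at its core, a stress-budget division: the time needed to reload the coseismic stress drop at the combined inter- and post-seismic stressing rate. The numbers below are illustrative placeholders chosen to bracket the quoted 23-41 yr, not values inferred in the study:

```python
# stress-budget arithmetic: recurrence = stress drop / stressing rate
stress_drop = 3.0                     # MPa, typical M6 coseismic stress drop
rate_low, rate_high = 0.073, 0.130    # MPa/yr, assumed stressing rates
print(f"recurrence: {stress_drop / rate_high:.0f}"
      f" to {stress_drop / rate_low:.0f} yr")
```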
Aseismic transient driving the swarm-like seismic sequence in the Pollino range, Southern Italy
(2015)
Tectonic earthquake swarms challenge our understanding of earthquake processes, since it is difficult to link observations to the underlying physical mechanisms and to assess the hazard they pose. Transient forcing is thought to initiate and drive the spatio-temporal release of energy during swarms. The nature of the transient forcing may vary across sequences and range from aseismic creeping or transient slip to diffusion of pore pressure pulses to fluid redistribution and migration within the seismogenic crust. Distinguishing between such forcing mechanisms may be critical to reduce epistemic uncertainties in the assessment of hazard due to seismic swarms, because it can provide information on the frequency-magnitude distribution of the earthquakes (often deviating from the assumed Gutenberg-Richter relation) and on the expected source parameters influencing the ground motion (for example, the stress drop). Here we study the ongoing Pollino range (Southern Italy) seismic swarm, a long-lasting seismic sequence with more than five thousand events recorded and located since October 2010. The two largest shocks (magnitude Mw = 4.2 and Mw = 5.1) are among the largest earthquakes ever recorded in an area which represents a seismic gap in the Italian historical earthquake catalogue. We investigate the geometrical, mechanical and statistical characteristics of the largest earthquakes and of the entire swarm. We calculate the focal mechanisms of the Ml > 3 events in the sequence and the transfer of Coulomb stress on nearby known faults, and we analyse the statistics of the earthquake catalogue. We find that only 25 per cent of the earthquakes in the sequence can be explained as aftershocks, and the remaining 75 per cent may be attributed to a transient forcing. The b-values change in time throughout the sequence, with low b-values correlated with the period of highest rate of activity and with the occurrence of the largest shock. In the light of recent studies on the palaeoseismic and historical activity in the Pollino area, we identify two scenarios consistent with the observations and our analysis: this and past seismic swarms may have been 'passive' features, with small fault patches failing on largely locked faults, or may have been accompanied by an 'active', largely aseismic, release of a large portion of the accumulated tectonic strain. These scenarios have very different implications for the seismic hazard of the area.
A review of source models to further the understanding of the seismicity of the Groningen field
(2022)
The occurrence of felt earthquakes due to gas production in Groningen has initiated numerous studies and modelling attempts to understand and quantify the induced seismicity in this region. The available models span the whole range from fully deterministic to purely empirical and stochastic approaches. In this article, we summarise the most important model approaches, describing their main achievements and limitations. In addition, we discuss remaining open questions and potential future directions of development.
The Seismic Hazard Inferred from Tectonics based on the Global Strain Rate Map (SHIFT_GSRM) earthquake forecast was designed to provide high-resolution estimates of global shallow seismicity for use in seismic hazard assessment. This model combines geodetic strain rates with global earthquake parameters to characterize long-term rates of seismic moment release and earthquake activity. Although SHIFT_GSRM properly computes seismicity rates in seismically active continental regions, it underestimates earthquake rates in subduction zones by an average factor of approximately 3. We present a complementary method to SHIFT_GSRM to more accurately forecast earthquake rates in 37 subduction segments, based on the principle of moment conservation and the use of regional interface seismicity parameters, such as subduction dip angles, corner magnitudes, and coupled seismogenic thicknesses. In seven progressive steps, we find that the SHIFT_GSRM earthquake-rate underpredictions are mainly due to the use of a global probability function of seismic moment release that poorly captures the great variability among subduction megathrust interfaces. Retrospective tests show that the forecast is consistent with the observations during the period from 1 January 1977 to 31 December 2014. Moreover, successful pseudoprospective evaluations for the period from 1 January 2015 to 31 December 2018 demonstrate the power of the regionalized earthquake model to properly estimate subduction-zone seismicity.
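Moment conservation converts a geodetically inferred moment rate into earthquake rates once a magnitude distribution is fixed. The sketch below uses a truncated Gutenberg-Richter law with a corner-style upper magnitude as a simplified stand-in for the tapered distribution used in such forecasts; all values are illustrative:

```python
import numpy as np

def rate_above(m_min, moment_rate, b=1.0, m_corner=8.5):
    """Annual rate of events above m_min implied by moment conservation for
    a Gutenberg-Richter law truncated at m_corner; moment_rate in Nm/yr."""
    beta = b * np.log(10)
    c = 1.5 * np.log(10)                    # Hanks-Kanamori moment scaling
    d = m_corner - m_min
    # mean moment per event above m_min under the truncated exponential law
    mean_m0 = (10 ** (1.5 * m_min + 9.1) * beta * (np.exp((c - beta) * d) - 1)
               / ((c - beta) * (1 - np.exp(-beta * d))))
    return moment_rate / mean_m0

# a segment accommodating 1e20 Nm/yr: the corner magnitude strongly controls
# the implied seismicity rate, hence the value of regionalized parameters
for mc in (8.0, 8.5, 9.0):
    print(f"corner {mc}: {rate_above(5.0, 1e20, m_corner=mc):.0f} events/yr > M5")
```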