Seismicity models are probabilistic forecasts of earthquake rates to support seismic hazard assessment.
Physics-based models allow extrapolation to previously unsampled parameter ranges and enable conclusions about underlying tectonic or human-induced processes.
The Coulomb Failure (CF) and rate-and-state (RS) models are two widely used physics-based seismicity models, both assuming pre-existing populations of faults responding to Coulomb stress changes.
The CF model depends on the absolute Coulomb stress and assumes instantaneous triggering if stress exceeds a threshold, while the RS model only depends on stress changes.
Both models can predict background earthquake rates and time-dependent stress effects, but the RS model with its three independent parameters can additionally explain delayed aftershock triggering.
This study introduces a modified CF model where the instantaneous triggering is replaced by a mean time-to-failure depending on the absolute stress value.
For the specific choice of an exponential dependence on stress and a stationary initial seismicity rate, we show that the model leads to results identical to those of the RS model and reproduces the Omori-Utsu relation for aftershock decays as well as stress-shadowing effects.
Thus, both CF and RS models can be seen as special cases of the new model. However, the new stress response model can also account for subcritical initial stress conditions and alternative functions of the mean time-to-failure depending on the problem and fracture mode.
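The stress-step response that both the RS model and the modified CF model reproduce can be illustrated with Dieterich's classic rate-and-state seismicity formula. The sketch below is not the paper's implementation; parameter values (A·σ, background rate, relaxation time) are arbitrary choices for demonstration only:

```python
import numpy as np

def rs_rate(t, dtau, a_sigma=0.01, r0=1.0, t_a=100.0):
    # Seismicity rate after a sudden Coulomb stress step dtau (Dieterich, 1994):
    #   R(t) = r0 / (1 + (exp(-dtau / (A*sigma)) - 1) * exp(-t / t_a))
    # r0: background rate, a_sigma: A*sigma (MPa), t_a: aftershock relaxation time.
    return r0 / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a))
```

A positive step raises the rate by a factor exp(dtau / A·σ) at t = 0, after which it decays back to r0 in an Omori-like fashion; a negative step depresses the rate below r0, i.e. a stress shadow.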
The Seismic Hazard Inferred from Tectonics based on the Global Strain Rate Map (SHIFT_GSRM) earthquake forecast was designed to provide high-resolution estimates of global shallow seismicity to be used in seismic hazard assessment. This model combines geodetic strain rates with global earthquake parameters to characterize long-term rates of seismic moment and earthquake activity. Although SHIFT_GSRM properly computes seismicity rates in seismically active continental regions, it underestimates earthquake rates in subduction zones by an average factor of approximately 3. We present a complementary method to SHIFT_GSRM to more accurately forecast earthquake rates in 37 subduction segments, based on the conservation of moment principle and the use of regional interface seismicity parameters, such as subduction dip angles, corner magnitudes, and coupled seismogenic thicknesses. In seven progressive steps, we find that SHIFT_GSRM earthquake-rate underpredictions are mainly due to the utilization of a global probability function of seismic moment release that poorly captures the great variability among subduction megathrust interfaces. Retrospective test results show that the forecast is consistent with the observations during the 1 January 1977 to 31 December 2014 period. Moreover, successful pseudoprospective evaluations for the 1 January 2015 to 31 December 2018 period demonstrate the power of the regionalized earthquake model to properly estimate subduction-zone seismicity.
A review of source models to further the understanding of the seismicity of the Groningen field
(2022)
The occurrence of felt earthquakes due to gas production in Groningen has initiated numerous studies and model attempts to understand and quantify induced seismicity in this region. The whole bandwidth of available models spans the range from fully deterministic models to purely empirical and stochastic models. In this article, we summarise the most important model approaches, describing their main achievements and limitations. In addition, we discuss remaining open questions and potential future directions of development.
We show that realistic aftershock sequences with space-time characteristics compatible with observations are generated by a model consisting of brittle fault segments separated by creeping zones. The dynamics of the brittle regions are governed by static/kinetic friction, 3D elastic stress transfer and small creep deformation. The creeping parts are characterized by high ongoing creep velocities. These regions store stress during earthquake failures and then release it in the interseismic periods. The resulting postseismic deformation leads to aftershock sequences following the modified Omori law. The ratio of creep coefficients in the brittle and creeping sections determines the duration of the postseismic transients and the exponent p of the modified Omori law.
Aseismic transient driving the swarm-like seismic sequence in the Pollino range, Southern Italy
(2015)
Tectonic earthquake swarms challenge our understanding of earthquake processes since it is difficult to link observations to the underlying physical mechanisms and to assess the hazard they pose. Transient forcing is thought to initiate and drive the spatio-temporal release of energy during swarms. The nature of the transient forcing may vary across sequences and range from aseismic creeping or transient slip to diffusion of pore pressure pulses to fluid redistribution and migration within the seismogenic crust. Distinguishing between such forcing mechanisms may be critical to reduce epistemic uncertainties in the assessment of hazard due to seismic swarms, because it can provide information on the frequency-magnitude distribution of the earthquakes (often deviating from the assumed Gutenberg-Richter relation) and on the expected source parameters influencing the ground motion (for example the stress drop). Here we study the ongoing Pollino range (Southern Italy) seismic swarm, a long-lasting seismic sequence with more than five thousand events recorded and located since October 2010. The two largest shocks (magnitude Mw = 4.2 and Mw = 5.1) are among the largest earthquakes ever recorded in an area which represents a seismic gap in the Italian historical earthquake catalogue. We investigate the geometrical, mechanical and statistical characteristics of the largest earthquakes and of the entire swarm. We calculate the focal mechanisms of the Ml > 3 events in the sequence and the transfer of Coulomb stress on nearby known faults and analyse the statistics of the earthquake catalogue. We find that only 25 per cent of the earthquakes in the sequence can be explained as aftershocks, and the remaining 75 per cent may be attributed to a transient forcing. The b-values change in time throughout the sequence, with low b-values correlated with the period of highest rate of activity and with the occurrence of the largest shock.
In the light of recent studies on the palaeoseismic and historical activity in the Pollino area, we identify two scenarios consistent with the observations and our analysis: This and past seismic swarms may have been 'passive' features, with small fault patches failing on largely locked faults, or may have been accompanied by an 'active', largely aseismic, release of a large portion of the accumulated tectonic strain. Those scenarios have very different implications for the seismic hazard of the area.
Due to large uncertainties and non-uniqueness in fault slip inversion, the investigation of stress coupling based on the direct comparison of independent slip inversions, for example, between the coseismic slip distribution and the interseismic slip deficit, may lead to ambiguous conclusions. In this study, we therefore adopt the stress-constrained joint inversion in the Bayesian approach of Wang et al., and implement the physical hypothesis of stress coupling as a prior. We test the hypothesis that interseismic locking is coupled with the coseismic rupture, and the early post-seismic deformation is a stress relaxation process in response to the coseismic stress perturbation. We characterize the role of stress coupling in the seismic cycle by evaluating the efficiency of the model to explain the available data. Taking the 2004 M6 Parkfield earthquake as a study case, we find that the stress coupling hypothesis is in agreement with the data. The coseismic rupture zone is found to be strongly locked during the interseismic phase and the post-seismic slip zone is indicated to be weakly creeping. The post-seismic deformation plays an important role to rebuild stress in the coseismic rupture zone. Based on our results for the stress accumulation during both inter- and post-seismic phase in the coseismic rupture zone, together with the coseismic stress drop, we estimate a recurrence time of M6 earthquake in Parkfield around 23-41 yr, suggesting that the duration of 38 yr between the two recent M6 events in Parkfield is not a surprise.
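The recurrence-time estimate at the end of the abstract rests on a simple renewal argument: the time to the next event is the coseismic stress drop divided by the rate at which stress is rebuilt. A minimal sketch, with hypothetical numbers that are not the paper's values:

```python
def recurrence_time_yr(coseismic_stress_drop_mpa, stressing_rate_mpa_per_yr):
    # First-order renewal estimate: time needed to re-accumulate the
    # coseismic stress drop at the combined inter- and post-seismic
    # stressing rate in the rupture zone.
    return coseismic_stress_drop_mpa / stressing_rate_mpa_per_yr

# Illustrative values only (not the study's estimates):
t_rec = recurrence_time_yr(3.0, 0.1)  # on the order of a few tens of years
```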
We study changes in effective stress (normal stress minus pore pressure) that occurred in the French Alps during the 2003-2004 Ubaye earthquake swarm. Two complementary data sets are used. First, a set of 974 relocated events allows us to finely characterize the shape of the seismogenic area and the spatial migration of seismicity during the crisis. Relocations are performed by a double-difference algorithm. We compute differences in travel times at stations both from absolute picking times and from cross-correlation delays of multiplets. The resulting catalog reveals a swarm alignment along a single planar structure striking N130 degrees E and dipping 80 degrees W. This relocated activity displays migration properties consistent with a triggering by a diffusive fluid overpressure front. This observation argues in favor of a deep-seated fluid circulation responsible for a significant part of the seismic activity in Ubaye. Second, we analyze time series of earthquake detections at a single seismological station located just above the swarm. This time series forms a dense chronicle of more than 16,000 events. We use it to estimate the history of effective stress changes during this sequence. For this purpose we model the rate of events by a stochastic epidemic-type aftershock sequence model with a nonstationary background seismic rate λ0(t). This background rate is estimated in discrete time windows. Window lengths are determined optimally according to a new change-point method on the basis of the interevent times distribution. We propose that background events are triggered directly by a transient fluid circulation at depth. Then, using rate-and-state constitutive friction laws, we estimate changes in effective stress for the observed rate of background events. We assume that changes in effective stress occurred under constant shear stressing rate conditions.
We finally obtain a maximum change in effective stress close to -8 MPa, which corresponds to a maximum fluid overpressure of about 8 MPa under constant normal stress conditions. This estimate is in good agreement with values obtained from numerical modeling of fluid flow at depth, or with direct measurements reported from fluid injection experiments.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (Seismol Res Lett, 89, 2020, 1238) conclude that anomalous animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns, and consequently, the observed anomalies in animal activity do not have any forecasting power.
Time-dependent probabilistic seismic hazard assessment requires a stochastic description of earthquake occurrences. While short-term seismicity models are well-constrained by observations, the recurrences of characteristic on-fault earthquakes are only derived from theoretical considerations, uncertain palaeo-events or proxy data. Despite the involved uncertainties and complexity, simple statistical models for a quasi-periodic recurrence of on-fault events are implemented in seismic hazard assessments. To test the applicability of statistical models, such as the Brownian relaxation oscillator or the stress release model, we perform a systematic comparison with deterministic simulations based on rate- and state-dependent friction, high-resolution representations of fault systems and quasi-dynamic rupture propagation. For the specific fault network of the Lower Rhine Embayment, Germany, we run both stochastic and deterministic model simulations based on the same fault geometries and stress interactions. Our results indicate that the stochastic simulators are able to reproduce the first-order characteristics of the major earthquakes on isolated faults as well as for coupled faults with moderate stress interactions. However, we find that all tested statistical models fail to reproduce the characteristics of strongly coupled faults, because multisegment rupturing resulting from a spatiotemporally correlated stress field is underestimated in the stochastic simulators. Our results suggest that stochastic models have to be extended by multirupture probability distributions to provide more reliable results.
Understanding and constraining the source of geodetic deformation in volcanic areas is an important component of hazard assessment. Here, we analyse deformation and seismicity for one year before the March 2021 Fagradalsfjall eruption in Iceland. We generate a high-resolution catalogue of 39,500 earthquakes using optical cable recordings and develop a poroelastic model to describe three pre-eruptional uplift and subsidence cycles at the Svartsengi geothermal field, 8 km west of the eruption site. We find the observed deformation is best explained by cyclic intrusions into a permeable aquifer by a fluid injected at 4 km depth below the geothermal field, with a total volume of 0.11 ± 0.05 km³ and a density of 850 ± 350 kg m⁻³. We therefore suggest that ingression of magmatic CO2 can explain the geodetic, gravity and seismic data, although some contribution of magma cannot be excluded.
According to the well-known Coulomb failure criterion the variation of either stress or pore pressure can result in earthquake rupture. Aftershock sequences characterized by the Omori law are often assumed to be the consequence of varying stress, whereas earthquake swarms are thought to be triggered by fluid intrusions. The role of stress triggering can be analyzed by modeling solely three-dimensional (3-D) elastic stress changes in the crust, but fluid flows which initiate seismicity cannot be investigated without considering complex seismicity patterns resulting from both pore pressure variations and earthquake-connected stress field changes. We show that the epidemic-type aftershock sequence (ETAS) model is an appropriate tool to extract the primary fluid signal from such complex seismicity patterns. We analyze a large earthquake swarm that occurred in 2000 in Vogtland/NW Bohemia, central Europe. By fitting the stochastic ETAS model, we find that stress triggering is dominant in creating the observed seismicity patterns and explains the observed fractal interevent time distribution. External forcing, identified with pore pressure changes due to fluid intrusion, is found to directly trigger only a few percent of the total activity. However, temporal deconvolution indicates that a pronounced fluid signal initiated the swarm. These results are confirmed by our analogous investigation of model simulations in which earthquakes are triggered by fluid intrusion as well as stress transfers on a fault plane embedded in a 3-D elastic half-space. The deconvolution procedure based on the ETAS model is able to reveal the underlying pore pressure variations.
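As a rough sketch of the model class used in such studies, the conditional intensity of a purely temporal ETAS model can be written in a few lines. Parameter values below are illustrative only, and the background rate μ is held constant here for simplicity, whereas the studies above use a nonstationary background:

```python
import numpy as np

def etas_rate(t, event_times, event_mags, mu=0.2, K=0.05,
              alpha=1.0, c=0.01, p=1.2, m0=2.0):
    # Conditional intensity of a temporal ETAS model:
    #   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p
    # mu: background rate; each past event adds an Omori-type aftershock
    # contribution scaled exponentially by its magnitude above m0.
    times = np.asarray(event_times, dtype=float)
    mags = np.asarray(event_mags, dtype=float)
    past = times < t
    kernel = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
    return mu + kernel.sum()
```

Fitting μ and the aftershock parameters to a catalog is what allows the externally forced (e.g. fluid-driven) part of the activity to be separated from the earthquake-to-earthquake triggering.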
The Gutenberg-Richter relation for earthquake magnitudes is the most famous empirical law in seismology. It states that the frequency of earthquake magnitudes follows an exponential distribution; this has been found to be a robust feature of seismicity above the completeness magnitude, and it is independent of whether global, regional, or local seismicity is analyzed. However, the exponent b of the distribution varies significantly in space and time, which is important for process understanding and seismic hazard assessment; this is particularly true because of the fact that the Gutenberg-Richter b-value acts as a proxy for the stress state and quantifies the ratio of large-to-small earthquakes. In our work, we focus on the automatic detection of statistically significant temporal changes of the b-value in seismicity data. In our approach, we use Bayes factors for model selection and estimate multiple change-points of the frequency-magnitude distribution in time. The method is first applied to synthetic data, showing its capability to detect change-points as a function of the size of the sample and the b-value contrast. Finally, we apply this approach to examples of observational data sets for which b-value changes have previously been stated. Our analysis of foreshock and aftershock sequences related to mainshocks, as well as earthquake swarms, shows that only a portion of the b-value changes is statistically significant.
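For readers unfamiliar with b-value estimation, the standard maximum-likelihood estimator (Aki-Utsu) is a one-liner; it is the usual baseline in such studies, though not necessarily the exact estimator used in the Bayesian change-point method described above. A minimal sketch with a synthetic check:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    # Aki-Utsu maximum-likelihood estimate of the Gutenberg-Richter b-value
    # for magnitudes at or above the completeness magnitude m_c;
    # dm is the magnitude binning width (0 for continuous magnitudes).
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - 0.5 * dm))

# Synthetic check: exponentially distributed magnitudes corresponding to b = 1.
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(1.0 / np.log(10.0), size=100_000)
b_hat = b_value_mle(mags, m_c=2.0)  # close to 1.0
```

Detecting a significant b-value change then amounts to testing whether two subsamples are better described by two different exponents than by one, which is where the Bayes factors come in.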
Earthquakes occurring close to hydrocarbon fields under production are often critically examined as potentially induced or triggered. However, clear and testable rules to discriminate between such events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of the integration over probability density functions. The probability that events have been human-triggered or induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei of strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced or only triggered events is theoretically possible if probability functions are convolved with a rupture fault filter.
We apply the approach to three recent mainshock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Sohlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly human induced, not even human triggered, and a third case in between both extremes.
Earthquake faults interact with each other in many different ways and hence earthquakes cannot be treated as individual independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, it does not necessarily mean that the earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can rather explain the occurrence of pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time-dependence of the process by looking at time-averaged values – as largely done in standard procedures of seismic hazard assessment – can thus lead to erroneous estimations not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed at a better understanding and characterization of earthquake interactions in order to improve seismic hazard estimations. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meter to kilometer) to fault system size (hundreds of kilometers), while the temporal scale of interest varied from the immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. The analysis of earthquake and deformation data was accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip and fluid flow.
Finally, new strategies and methods have been developed and tested which help to improve seismic hazard estimations by taking the time-dependence of the earthquake process appropriately into account.
This thesis addresses the assumption that earthquakes originate from a self-organized critical state of the Earth's crust. Using an extension of previous models, it is shown that such a state can account not only for the size distribution of earthquakes (Gutenberg-Richter law) but also for their observed spatiotemporal occurrence, e.g. the Omori law for aftershock sequences. Furthermore, the question of the predictability of large earthquakes in such model simulations is investigated.
A volcanic eruption is usually preceded by seismic precursors, but their interpretation and use for forecasting the eruption onset time remain a challenge. A part of the eruptive processes in open conduits of volcanoes may be similar to those encountered in geysers. Since geysers erupt more often, they are useful sites for testing new forecasting methods. We tested the application of Permutation Entropy (PE) as a robust method to assess the complexity in seismic recordings of the Strokkur geyser, Iceland. Strokkur features several minute-long eruptive cycles, enabling us to verify in 63 recorded cycles whether PE behaves consistently from one eruption to the next one. We performed synthetic tests to understand the effect of different parameter settings in the PE calculation. Our application to Strokkur shows a distinct, repeating PE pattern consistent with previously identified phases in the eruptive cycle. We find a systematic increase in PE within the last 15 s before the eruption, indicating that an eruption will occur. We quantified the predictive power of PE, showing that PE performs better than seismic signal strength or quiescence when it comes to forecasting eruptions.
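Permutation entropy itself is a standard, easily implemented measure (Bandt & Pompe, 2002); the sketch below shows the basic computation, not the study's exact parameter settings (embedding order and delay must be tuned, which is what the synthetic tests mentioned above explore):

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    # Normalized permutation entropy: Shannon entropy of the distribution
    # of ordinal patterns (rank orderings) of length `order` in the signal.
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    h = -np.sum(probs * np.log2(probs))
    return h / np.log2(math.factorial(order))  # 0 = fully ordered, 1 = random
```

A monotonic signal produces a single ordinal pattern (entropy 0), while uncorrelated noise visits all patterns equally (entropy near 1); the forecasting signal described above is a systematic change of this quantity before an eruption.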
Geysers are hot springs whose frequency of water eruptions remains poorly understood. We set up a local broadband seismic network for 1 year at Strokkur geyser, Iceland, and developed an unprecedented catalog of 73,466 eruptions. We detected 50,135 single eruptions but find that the geyser is also characterized by sets of up to six eruptions in quick succession. The number of single to sextuple eruptions exponentially decreased, while the mean waiting time after an eruption linearly increased (3.7 to 16.4 min). While secondary eruptions within double to sextuple eruptions have a smaller mean seismic amplitude, the amplitude of the first eruption is comparable for all eruption types. We statistically model the eruption frequency assuming discharges proportional to the eruption multiplicity and a constant probability for subsequent events within a multi-eruption set. The waiting time after an eruption is predictable, but not the type or amplitude of the next one.
Plain Language Summary: Geysers are springs that often erupt in hot water fountains. They erupt more often than volcanoes but are quite similar. Nevertheless, it is poorly understood how often volcanoes and also geysers erupt. We created a list of 73,466 eruption times of Strokkur geyser, Iceland, from 1 year of seismic data. The geyser erupted one to six times in quick succession. We found 50,135 single eruptions but only 1 sextuple eruption, while the mean waiting time increased from 3.7 min after single eruptions to 16.4 min after sextuple eruptions. Mean amplitudes of each eruption type were higher for single eruptions, but all first eruptions in a succession were similar in height. Assuming a constant heat inflow at depth, we can predict the waiting time after an eruption but not the type or amplitude of the next one.
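The "constant probability for subsequent events" assumption implies a geometric decay of multiplicity counts, which can be sketched directly. The continuation probability q below is a hypothetical value for illustration, not a fitted parameter from the study:

```python
def expected_multiplicity_counts(n_sets, q, k_max=6):
    # If, after each eruption, another eruption follows within the same set
    # with constant probability q, the expected number of k-fold sets decays
    # geometrically: N_k = n_sets * (1 - q) * q**(k - 1).
    return [n_sets * (1.0 - q) * q ** (k - 1) for k in range(1, k_max + 1)]

# Hypothetical continuation probability, for illustration only:
counts = expected_multiplicity_counts(60000, 0.17)
```

Single eruptions dominate and each higher multiplicity is rarer by the factor q, matching the exponential decrease from single to sextuple eruptions reported above.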
The statistics of time delays between successive earthquakes has recently been claimed to be universal and to show the existence of clustering beyond the duration of aftershock bursts. We demonstrate that these claims are unjustified. Stochastic simulations with Poissonian background activity and triggered Omori-type aftershock sequences are shown to reproduce the interevent-time distributions observed on different spatial and magnitude scales in California. Thus the empirical distribution can be explained without any additional long-term clustering. Furthermore, we find that the shape of the interevent-time distribution, which can be approximated by the gamma distribution, is determined by the percentage of mainshocks in the catalog. This percentage can be calculated by the mean and variance of the interevent times and varies between 5% and 90% for different regions in California. Our investigation of stochastic simulations indicates that the interevent-time distribution provides a nonparametric reconstruction of the mainshock magnitude-frequency distribution that is superior to standard declustering algorithms.
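The statement that the mainshock percentage follows from the mean and variance of the interevent times corresponds to a method-of-moments estimate of the gamma shape parameter. A minimal sketch of that estimator, checked on the limiting case of a pure Poisson process:

```python
import numpy as np

def mainshock_fraction(interevent_times):
    # Method-of-moments shape estimate for gamma-distributed interevent
    # times, mean**2 / variance; under the model described above this
    # approximates the fraction of (independent) mainshocks in the catalog.
    dt = np.asarray(interevent_times, dtype=float)
    return dt.mean() ** 2 / dt.var()

# Limiting case: a pure Poisson process (exponential interevent times,
# no aftershocks) should give a fraction near 1.
rng = np.random.default_rng(3)
f = mainshock_fraction(rng.exponential(1.0, size=200_000))
```

A heavily clustered catalog has many short aftershock delays, inflating the variance relative to the mean and pushing the estimated fraction well below 1.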
We discuss to what extent a given earthquake catalog and the assumption of a doubly truncated Gutenberg-Richter distribution for the earthquake magnitudes allow for the calculation of confidence intervals for the maximum possible magnitude M. We show that, without further assumptions such as the existence of an upper bound of M, only very limited information may be obtained. In a frequentist formulation, for each confidence level alpha the confidence interval diverges with finite probability. In a Bayesian formulation, the posterior distribution of the upper magnitude is not normalizable. We conclude that the common approach to derive confidence intervals from the variance of a point estimator fails. Technically, this problem can be overcome by introducing an upper bound M̃ for the maximum magnitude. Then the Bayesian posterior distribution can be normalized, and its variance decreases with the number of observed events. However, because the posterior depends significantly on the choice of the unknown value of M̃, the resulting confidence intervals are essentially meaningless. The use of an informative prior distribution accounting for pre-knowledge of M is also of little use, because the prior is only modified in the case of the occurrence of an extreme event. Our results suggest that the maximum possible magnitude M should be better replaced by M(T), the maximum expected magnitude in a given time interval T, for which the calculation of exact confidence intervals becomes straightforward. From a physical point of view, numerical models of the earthquake process adjusted to specific fault regions may be a powerful alternative to overcome the shortcomings of purely statistical inference.
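The non-normalizability argument can be made concrete: under a doubly truncated Gutenberg-Richter law, the likelihood of the maximum magnitude M flattens out to a positive constant as M grows, so a flat prior cannot yield a proper posterior. A sketch with an assumed b-value of 1 (beta = ln 10) and toy magnitudes:

```python
import numpy as np

def rel_likelihood_mmax(M, mags, m0, beta):
    # Likelihood of the maximum magnitude M under a doubly truncated
    # Gutenberg-Richter law, relative to its M -> infinity limit:
    #   L(M) ∝ (1 - exp(-beta * (M - m0)))**(-n)   for M >= max observed mag.
    # L(M) decreases only toward a positive constant, so with a flat prior
    # the posterior integral over M diverges.
    mags = np.asarray(mags, dtype=float)
    if M < mags.max():
        return 0.0
    return (1.0 - np.exp(-beta * (M - m0))) ** (-len(mags))
```

Because the tail of L(M) is asymptotically flat, each additional unit of M contributes nearly the same posterior mass, which is exactly why the variance-based confidence intervals criticized above are meaningless.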