The Seismic Hazard Inferred from Tectonics based on the Global Strain Rate Map (SHIFT_GSRM) earthquake forecast was designed to provide high-resolution estimates of global shallow seismicity to be used in seismic hazard assessment. This model combines geodetic strain rates with global earthquake parameters to characterize long-term rates of seismic moment and earthquake activity. Although SHIFT_GSRM properly computes seismicity rates in seismically active continental regions, it underestimates earthquake rates in subduction zones by an average factor of approximately 3. We present a complementary method to SHIFT_GSRM to more accurately forecast earthquake rates in 37 subduction segments, based on the conservation of moment principle and the use of regional interface seismicity parameters, such as subduction dip angles, corner magnitudes, and coupled seismogenic thicknesses. In seven progressive steps, we find that SHIFT_GSRM earthquake-rate underpredictions are mainly due to the utilization of a global probability function of seismic moment release that poorly captures the great variability among subduction megathrust interfaces. Retrospective test results show that the forecast is consistent with the observations during the 1 January 1977 to 31 December 2014 period. Moreover, successful pseudoprospective evaluations for the 1 January 2015 to 31 December 2018 period demonstrate the power of the regionalized earthquake model to properly estimate subduction-zone seismicity.
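The moment-conservation step described above can be sketched numerically: given a geodetically derived seismic moment rate and a tapered Gutenberg-Richter magnitude distribution with a segment-specific corner magnitude, the forecast rate of events above a threshold magnitude is the moment rate divided by the mean moment per event. All parameter values below are illustrative placeholders, not the SHIFT_GSRM or segment-specific values from the study.

```python
import numpy as np

def magnitude_to_moment(mw):
    """Hanks & Kanamori (1979): seismic moment in N*m from moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.05)

def mean_moment_tapered_gr(m_t, m_corner, beta):
    """Mean seismic moment per event above threshold moment m_t for a tapered
    Gutenberg-Richter (tapered Pareto) distribution, by numerical integration
    of the survivor function S(M) = (m_t/M)**beta * exp((m_t - M)/m_corner)."""
    m = np.logspace(np.log10(m_t), np.log10(m_corner) + 3.0, 20000)
    surv = (m_t / m) ** beta * np.exp((m_t - m) / m_corner)
    # E[M] = m_t + integral of the survivor function over [m_t, inf)
    return m_t + np.sum(0.5 * (surv[1:] + surv[:-1]) * np.diff(m))

# hypothetical subduction segment: geodetic moment rate of 1e19 N*m/yr, b = 1.0
moment_rate = 1e19                     # N*m per year (illustrative)
beta = 2.0 / 3.0 * 1.0                 # beta = (2/3) * b
m_t = magnitude_to_moment(5.95)        # forecast threshold magnitude
m_corner = magnitude_to_moment(8.5)    # segment corner magnitude (illustrative)
rate = moment_rate / mean_moment_tapered_gr(m_t, m_corner, beta)
print(f"forecast rate of M >= 5.95 events: {rate:.2f} per year")
```

Raising the corner magnitude lowers the forecast rate for a fixed moment rate, which is why regionalized corner magnitudes matter for subduction segments.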
The first step in the estimation of probabilistic seismic hazard in a region commonly consists of the definition and characterization of the relevant seismic sources. Because in low-seismicity regions seismicity is often rather diffuse and faults are difficult to identify, large areal source zones are mostly used. The corresponding hypothesis is that seismicity is uniformly distributed inside each areal seismic source zone. In this study, the impact of this hypothesis on the probabilistic hazard estimation is quantified through the generation of synthetic spatial seismicity distributions. Fractal seismicity distributions are generated inside a given source zone and probabilistic hazard is computed for a set of sites located inside this zone. In our study, the impact of the spatial seismicity distribution is defined as the deviation from the hazard value obtained for a spatially uniform seismicity distribution. From the generation of a large number of synthetic distributions, the correlation between the fractal dimension D and the impact is derived. The results show that the assumption of spatially uniform seismicity tends to bias the hazard to higher values. The correlation can be used to determine the systematic biases and uncertainties for hazard estimations in real cases, where the fractal dimension has been determined. We apply the technique in Germany (Cologne area) and in France (Alps).
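The generation of synthetic fractal seismicity and the estimation of its dimension D can be sketched as follows, assuming a random-Cantor construction and a Grassberger-Procaccia correlation-dimension estimate; both are standard choices, not necessarily those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def fractal_epicentres(levels, n_keep=4, n_div=3):
    """Synthetic fractal epicentre field: recursively keep n_keep random
    sub-squares out of n_div x n_div, giving a theoretical dimension
    D = log(n_keep) / log(n_div) (about 1.26 for 4 of 9)."""
    cells = [(0.0, 0.0, 1.0)]                   # (x0, y0, size)
    for _ in range(levels):
        new = []
        for x0, y0, s in cells:
            for idx in rng.choice(n_div * n_div, size=n_keep, replace=False):
                i, j = divmod(int(idx), n_div)
                new.append((x0 + i * s / n_div, y0 + j * s / n_div, s / n_div))
        cells = new
    return np.array([(x + 0.5 * s, y + 0.5 * s) for x, y, s in cells])

def correlation_dimension(pts, r1=0.01, r2=0.1):
    """Grassberger-Procaccia estimate: slope of log C(r) between r1 and r2."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(len(pts), k=1)]       # unique pair distances
    c1, c2 = np.mean(d < r1), np.mean(d < r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

pts = fractal_epicentres(levels=5)              # 4**5 = 1024 synthetic epicentres
D = correlation_dimension(pts)
print(f"estimated fractal dimension D = {D:.2f}")
```

A uniform distribution over the zone would give D close to 2; clustered (fractal) seismicity gives D well below 2, which is the regime in which the bias quantified above becomes relevant.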
In low-seismicity regions, such as France or Germany, the estimation of probabilistic seismic hazard must cope with the difficult identification of active faults and with the small amount of seismic data available. Since the probabilistic hazard method was introduced, most studies have assumed a Poissonian occurrence of earthquakes. Here we propose a method that enables the inclusion of time and space dependences between earthquakes in the probabilistic estimation of hazard. Combining the Epidemic Type Aftershock Sequence (ETAS) seismicity model with a Monte Carlo technique, aftershocks are naturally accounted for in the hazard determination. The method is applied to the Pyrenees region in Southern France. The impact on hazard of declustering and of the usual assumption that earthquakes occur according to a Poisson process is quantified, showing that aftershocks contribute on average less than 5 per cent to the probabilistic hazard, with an upper bound around 18 per cent.
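The ETAS simulation underlying such a Monte Carlo hazard computation can be sketched as a branching cascade: background events arrive as a Poisson process, and each event spawns Omori-law-distributed aftershocks with Gutenberg-Richter magnitudes. The parameter values below are illustrative, not the Pyrenees values estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative ETAS parameters (not the values fitted in the study)
mu    = 0.2          # background rate, events/day above m0
K     = 0.02         # aftershock productivity
alpha = 0.8          # productivity exponent (alpha < b keeps the cascade subcritical)
c, p  = 0.01, 1.1    # Omori-law parameters (days, exponent)
b, m0 = 1.0, 2.0     # Gutenberg-Richter b-value, completeness magnitude
T     = 365.0        # catalogue length in days

def gr_magnitudes(n):
    # Gutenberg-Richter: exponential magnitudes above m0
    return m0 + rng.exponential(1.0 / (b * np.log(10.0)), n)

def simulate_etas():
    n_bg = rng.poisson(mu * T)
    times = list(rng.uniform(0.0, T, n_bg))          # background events
    mags = list(gr_magnitudes(n_bg))
    queue = list(zip(times, mags))
    while queue:                                     # branching cascade
        t, m = queue.pop()
        n_aft = rng.poisson(K * 10.0 ** (alpha * (m - m0)))
        u = rng.uniform(size=n_aft)
        # inverse-CDF sample of the Omori kernel proportional to (dt + c)**-p
        dt = c * ((1.0 - u) ** (1.0 / (1.0 - p)) - 1.0)
        for ti, mi in zip(t + dt, gr_magnitudes(n_aft)):
            if ti < T:
                times.append(ti); mags.append(mi)
                queue.append((ti, mi))
    order = np.argsort(times)
    return np.array(times)[order], np.array(mags)[order]

t_ev, m_ev = simulate_etas()
print(f"{len(t_ev)} simulated events ({mu * T:.0f} background expected)")
```

Repeating this simulation many times and computing ground-motion exceedances per catalogue is the essence of a Monte Carlo hazard estimate that retains aftershock clustering.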
Earthquakes occurring close to hydrocarbon fields under production are often critically scrutinized as possibly induced or triggered. However, clear and testable rules to discriminate between such events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases in question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are accounted for by integration over probability density functions. The probability that events have been human-triggered or human-induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is advantageous for the method. If the rupture plane has been estimated, the discrimination between induced and merely triggered events is theoretically possible if the probability functions are convolved with a rupture fault filter.
We apply the approach to three recent main shock events: (1) the M-w 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the M-w 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Sohlingen gas field; and (3) the M-w 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly human induced, not even human triggered, and a third case in between both extremes.
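The rate- and state-dependent seismicity model used in this kind of analysis can be sketched with Dieterich's (1994) closed-form response of the seismicity rate to a sudden Coulomb stress step; the parameter values below are illustrative, not those calibrated for the three case studies.

```python
import numpy as np

def dieterich_rate(t, dcfs, r0=1.0, a_sigma=0.01e6, t_a=5.0):
    """Dieterich (1994) seismicity-rate response to a Coulomb stress step
    dcfs (Pa) applied at t = 0. r0: background rate; a_sigma = A * sigma_n
    (Pa); t_a: aftershock relaxation time (yr). Values are illustrative."""
    gamma = (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r0 / gamma

t = np.linspace(0.0, 20.0, 201)              # years after the stress step
r_up = dieterich_rate(t, dcfs=+0.05e6)       # +0.05 MPa: rate jump, Omori-like decay
r_down = dieterich_rate(t, dcfs=-0.05e6)     # -0.05 MPa: stress shadow
print(r_up[0])   # immediate amplification = exp(dcfs / a_sigma), about 148 here
```

The instantaneous amplification exp(dcfs / a_sigma) is what converts a modeled reservoir-induced stress change into a triggering probability relative to the natural background rate.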
We study changes in effective stress (normal stress minus pore pressure) that occurred in the French Alps during the 2003-2004 Ubaye earthquake swarm. Two complementary data sets are used. First, a set of 974 relocated events allows us to finely characterize the shape of the seismogenic area and the spatial migration of seismicity during the crisis. Relocations are performed with a double-difference algorithm. We compute differences in travel times at stations both from absolute picking times and from cross-correlation delays of multiplets. The resulting catalog reveals a swarm alignment along a single planar structure striking N130 degrees E and dipping 80 degrees W. This relocated activity displays migration properties consistent with triggering by a diffusive fluid overpressure front. This observation argues in favor of a deep-seated fluid circulation responsible for a significant part of the seismic activity in Ubaye. Second, we analyze time series of earthquake detections at a single seismological station located just above the swarm. This time series forms a dense chronicle of more than 16,000 events. We use it to estimate the history of effective stress changes during this sequence. For this purpose we model the rate of events by a stochastic epidemic-type aftershock sequence model with a nonstationary background seismic rate lambda_0(t). This background rate is estimated in discrete time windows. Window lengths are determined optimally according to a new change-point method based on the interevent-time distribution. We propose that background events are triggered directly by a transient fluid circulation at depth. Then, using rate-and-state constitutive friction laws, we estimate changes in effective stress for the observed rate of background events. We assume that changes in effective stress occurred under constant shear stressing rate conditions.
We finally obtain a maximum change in effective stress close to -8 MPa, which corresponds to a maximum fluid overpressure of about 8 MPa under constant normal stress conditions. This estimate is in good agreement with values obtained from numerical modeling of fluid flow at depth, or with direct measurements reported from fluid injection experiments.
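The diffusive-front test for fluid triggering mentioned above can be sketched with the classical parabolic triggering-front envelope of Shapiro et al.; the diffusivity value below is illustrative, not the Ubaye estimate.

```python
import numpy as np

def triggering_front(t_days, diffusivity=0.1):
    """Shapiro-type triggering front r(t) = sqrt(4*pi*D*t): the distance (m)
    a diffusing pore-pressure perturbation can reach t days after its onset,
    for a hydraulic diffusivity D in m^2/s (0.1 m^2/s is an illustrative
    value, not the one inferred for Ubaye)."""
    return np.sqrt(4.0 * np.pi * diffusivity * np.asarray(t_days) * 86400.0)

# hypocentres plotting inside this envelope in a distance-versus-time diagram
# are consistent with triggering by a diffusive fluid-overpressure front
print(triggering_front([1.0, 10.0, 100.0]))   # roughly 330, 1040, 3300 metres
```

Fitting this envelope to the outer edge of the migrating seismicity is the standard way to estimate the hydraulic diffusivity of the swarm volume.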
Geysers are hot springs whose frequency of water eruptions remains poorly understood. We set up a local broadband seismic network for 1 year at Strokkur geyser, Iceland, and developed an unprecedented catalog of 73,466 eruptions. We detected 50,135 single eruptions but find that the geyser is also characterized by sets of up to six eruptions in quick succession. The number of single to sextuple eruptions decreased exponentially, while the mean waiting time after an eruption increased linearly (3.7 to 16.4 min). While secondary eruptions within double to sextuple eruptions have a smaller mean seismic amplitude, the amplitude of the first eruption is comparable for all eruption types. We statistically model the eruption frequency assuming discharges proportional to the eruption multiplicity and a constant probability for subsequent events within a multiple eruption. The waiting time after an eruption is predictable, but not the type or amplitude of the next one.

Plain Language Summary: Geysers are springs that often erupt in hot water fountains. They erupt more often than volcanoes but are quite similar. Nevertheless, it is poorly understood how often volcanoes and geysers erupt. We created a list of 73,466 eruption times of Strokkur geyser, Iceland, from 1 year of seismic data. The geyser erupted one to six times in quick succession. We found 50,135 single eruptions but only 1 sextuple eruption, while the mean waiting time increased from 3.7 min after single eruptions to 16.4 min after sextuple eruptions. Mean amplitudes of each eruption type were higher for single eruptions, but all first eruptions in a succession were similar in height. Assuming a constant heat inflow at depth, we can predict the waiting time after an eruption but not the type or amplitude of the next one.
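The "constant probability for subsequent events" assumption implies a geometric distribution of eruption multiplicities, which can be sketched as follows. The multiplicity counts below are illustrative placeholders (only the 50,135 single eruptions match the catalogue figure quoted above; the rest are made up for the example).

```python
import numpy as np

# placeholder counts of single..sextuple eruption sets: the real Strokkur
# catalogue reports 50,135 singles; the remaining values are illustrative
counts = np.array([50135.0, 9500.0, 1800.0, 340.0, 60.0, 1.0])
k = np.arange(1, len(counts) + 1)

# constant-continuation model: after each eruption another follows
# immediately with probability q, so P(multiplicity = k) = (1 - q) * q**(k - 1)
mean_k = np.sum(k * counts) / counts.sum()
q = 1.0 - 1.0 / mean_k                    # geometric-distribution MLE for q
expected = counts.sum() * (1.0 - q) * q ** (k - 1)
print(f"continuation probability q = {q:.3f}")
print(np.round(expected).astype(int))     # exponentially decreasing counts
```

A single parameter q thus reproduces the exponential decrease from single to sextuple eruptions, consistent with the statistical model described in the abstract.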
The aim of this paper is to characterize the spatio-temporal distribution of Central European seismicity. Specifically, by using a non-parametric statistical approach, the proportional hazard model, leading to an empirical estimation of the hazard function, we provide some constraints on the time behavior of earthquake generation mechanisms. The results indicate that the most conspicuous characteristic of M-w 4.0+ earthquakes is a temporal clustering lasting a couple of years. This suggests that the probability of occurrence increases immediately after a previous event. After a few years, the process becomes almost time independent. Furthermore, we investigate the cluster properties of Central European seismicity by comparing the obtained result with those of synthetic catalogs generated by the epidemic type aftershock sequence (ETAS) model, which has previously been applied successfully to short-term clustering. Our results indicate that the ETAS model is not well suited to describe the seismicity as a whole, while it is able to capture the features of the short-term behavior. Remarkably, similar results have previously been found for Italy using a higher magnitude threshold.
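The empirical hazard function referred to above — the conditional rate of the next event given the elapsed waiting time — can be sketched from interevent times alone. The synthetic data below are illustrative: a clustered catalogue yields a decreasing hazard (elevated probability right after an event), while a Poissonian one yields a roughly constant hazard.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_hazard(waiting_times, bins):
    """Empirical hazard function h(t): the conditional rate of the next
    event at elapsed time t, given that no event has yet occurred."""
    w = np.asarray(waiting_times)
    h = np.empty(len(bins) - 1)
    for i in range(len(bins) - 1):
        lo, hi = bins[i], bins[i + 1]
        at_risk = np.sum(w >= lo)                    # still "waiting" at t = lo
        events = np.sum((w >= lo) & (w < hi))        # next event falls in the bin
        h[i] = events / at_risk / (hi - lo) if at_risk else np.nan
    return h

# clustered waiting times (short intra-cluster + long inter-cluster waits)
clustered = np.concatenate([rng.exponential(0.2, 800), rng.exponential(5.0, 200)])
poissonian = rng.exponential(1.16, 1000)   # same mean waiting time, no clustering
bins = np.linspace(0.0, 4.0, 9)
h_clustered = empirical_hazard(clustered, bins)   # decreasing: temporal clustering
h_poisson = empirical_hazard(poissonian, bins)    # roughly constant
print(np.round(h_clustered, 2))
print(np.round(h_poisson, 2))
```

A hazard that decays toward a constant level after a few time units mirrors the finding quoted above: clustering at short times, near time independence afterwards.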
The Gutenberg-Richter relation for earthquake magnitudes is the most famous empirical law in seismology. It states that the frequency of earthquake magnitudes follows an exponential distribution; this has been found to be a robust feature of seismicity above the completeness magnitude, regardless of whether global, regional, or local seismicity is analyzed. However, the exponent b of the distribution varies significantly in space and time, which is important for process understanding and seismic hazard assessment, particularly because the Gutenberg-Richter b-value acts as a proxy for the stress state and quantifies the ratio of large to small earthquakes. In our work, we focus on the automatic detection of statistically significant temporal changes of the b-value in seismicity data. In our approach, we use Bayes factors for model selection and estimate multiple change-points of the frequency-magnitude distribution in time. The method is first applied to synthetic data, showing its capability to detect change-points as a function of the sample size and the b-value contrast. Finally, we apply this approach to examples of observational data sets for which b-value changes have previously been reported. Our analysis of foreshock and aftershock sequences related to mainshocks, as well as earthquake swarms, shows that only a portion of the b-value changes is statistically significant.
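The b-value estimation that underlies any such change-point analysis is typically the Aki (1965) maximum-likelihood estimator with Utsu's correction for binned magnitudes, sketched below on synthetic Gutenberg-Richter data with a known true b = 1.0 (the change-point machinery itself is not reproduced here).

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965) with Utsu's correction for
    magnitudes binned to width dm, plus its standard error."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(len(m))

rng = np.random.default_rng(7)
# synthetic Gutenberg-Richter magnitudes with true b = 1.0; continuous values
# start at m_c - dm/2 so that the bin centred on m_c = 2.0 is complete
true_b, m_c, dm, n = 1.0, 2.0, 0.1, 5000
m_cont = (m_c - dm / 2.0) + rng.exponential(np.log10(np.e) / true_b, n)
mags = np.round(m_cont / dm) * dm          # bin to 0.1 magnitude units
b, b_err = b_value_mle(mags, m_c)
print(f"b = {b:.2f} +/- {b_err:.2f}")
```

The standard error b / sqrt(N) makes explicit why the detectability of a change-point depends on both the sample size and the b-value contrast, as tested on synthetic data in the study.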
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes in earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. To this end, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model, estimate possible change-points, and additionally use a likelihood ratio test to calculate the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the areas in which the changes occur. The method is first applied to synthetic data, demonstrating its capability to detect true change-points. Finally, we apply this approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
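The likelihood-ratio significance test mentioned above can be sketched for the simplest case: a Poisson process with a possible step change in rate at a known candidate change-point (the Bayesian selection of the change-point itself is omitted; the catalogue below is synthetic and illustrative).

```python
import numpy as np

def rate_change_llr(event_times, t_change, t_end):
    """Log-likelihood ratio of a two-rate Poisson model (step change at
    t_change) against a constant-rate null, for a catalogue on [0, t_end]."""
    t = np.asarray(event_times)
    n1, n2 = int(np.sum(t < t_change)), int(np.sum(t >= t_change))
    r1, r2 = n1 / t_change, n2 / (t_end - t_change)   # MLE rates per segment
    r0 = (n1 + n2) / t_end                            # MLE rate under the null
    llr = 0.0
    if n1: llr += n1 * np.log(r1 / r0)
    if n2: llr += n2 * np.log(r2 / r0)
    return r1, r2, llr

rng = np.random.default_rng(3)
# synthetic catalogue: 0.5 events/day for 400 days, then 2.0/day for 200 days
times = np.concatenate([rng.uniform(0.0, 400.0, rng.poisson(200)),
                        400.0 + rng.uniform(0.0, 200.0, rng.poisson(400))])
r1, r2, llr = rate_change_llr(times, t_change=400.0, t_end=600.0)
# 2*LLR is asymptotically chi-square with 1 dof; 3.84 is the 95% quantile
significant = 2.0 * llr > 3.84
print(f"rate before: {r1:.2f}/day, after: {r2:.2f}/day, significant: {significant}")
```

Scanning t_change over a grid and penalizing the extra parameters (or, as in the study, comparing models via Bayes factors) turns this two-segment test into a full change-point detector.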