The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including an estimation of the maximum earthquake magnitude, is considered necessary for deciding on future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
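As a rough illustration of how such a frequentist bound can be obtained, the sketch below inverts the distribution of the largest observed magnitude under a doubly truncated Gutenberg-Richter law. The b-value, completeness magnitude, and catalog size are hypothetical placeholders rather than the Groningen data, and the construction is only one common variant of the confidence-interval methods referred to above.

```python
# Hedged sketch: one-sided confidence bound for m_max under a doubly truncated
# Gutenberg-Richter (GR) law, obtained by inverting the distribution of the
# largest observed magnitude. The b-value, completeness and catalog size below
# are illustrative placeholders, not the Groningen catalog.
import numpy as np
from scipy.optimize import brentq

def truncated_gr_cdf(m, m_c, m_max, beta):
    """CDF of the doubly truncated exponential (GR) magnitude distribution."""
    return (1.0 - np.exp(-beta * (m - m_c))) / (1.0 - np.exp(-beta * (m_max - m_c)))

def mmax_upper_bound(mu_obs, n, m_c, b, alpha):
    """Upper end of a (1 - alpha) confidence interval for m_max: the largest
    m_max for which a maximum as small as mu_obs among n events still occurs
    with probability >= alpha (test inversion: F(mu_obs; m_max)**n = alpha)."""
    beta = b * np.log(10.0)
    # If even an unbounded GR law yields a maximum <= mu_obs with probability
    # greater than alpha, the confidence interval diverges (no finite bound).
    if (1.0 - np.exp(-beta * (mu_obs - m_c))) ** n > alpha:
        return np.inf
    f = lambda m_max: truncated_gr_cdf(mu_obs, m_c, m_max, beta) ** n - alpha
    return brentq(f, mu_obs + 1e-6, 15.0)

# purely illustrative numbers:
print(mmax_upper_bound(mu_obs=3.6, n=300, m_c=1.5, b=1.0, alpha=0.10))
```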
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated for a predefined finite time interval and an imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of μ for the future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for time intervals of 30, 50, and 100 years and imposed levels of confidence α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to assess the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals of 30 and 50 years and imposed levels of confidence 1 - α = 0.95 and 0.90, the probability distribution of μ is concentrated around μ = 7.16-8.23 in all seismic zones.
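For orientation, the core relation behind such a calculation can be written down explicitly. Under the stated assumptions (stationary Poisson occurrence with productivity Λ above the completeness magnitude m_c and a doubly truncated GR magnitude distribution F_b), the maximum magnitude μ in a future window of length T_f satisfies the standard expression below; the notation is our paraphrase and the paper's exact formulation may differ.

```latex
\Pr(\mu \le m \mid \Lambda, b) \;=\; \exp\!\bigl\{-\Lambda\, T_f\,[\,1 - F_b(m)\,]\bigr\},
\qquad m_c \le m \le m_{\max},
\qquad\text{and}\qquad
\Pr(\mu \le m \mid \text{catalog}) \;=\; \int \Pr(\mu \le m \mid \Lambda, b)\,
p(\Lambda, b \mid \text{catalog})\, d\Lambda\, db .
```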
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and to calculate precise confidence intervals based on an imposed level of confidence α. In detail, we present a model for the estimation of the maximum magnitude to occur in a future time interval T_f, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Λ and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Λ and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity, but high-risk, Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
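A minimal Monte Carlo sketch of such a Bayesian scheme is given below. It rests on simplifying assumptions that are not taken from the paper: an untruncated GR law, flat priors on the Poisson productivity and on the GR decay parameter (which then have conjugate Gamma posteriors), and a synthetic catalog in place of the Japanese or LRE data.

```python
# Minimal Monte Carlo sketch of a Bayesian estimate of the future maximum
# magnitude, under simplifying assumptions not taken from the paper: an
# untruncated GR law, flat priors on the Poisson productivity lam and on
# beta = b*ln(10), and a synthetic catalog.
import numpy as np

rng = np.random.default_rng(42)

# synthetic "observed" catalog: T years with rate lam_true above m_c, GR b_true
T, T_f, m_c = 100.0, 50.0, 4.0
lam_true, b_true = 5.0, 1.0
n = rng.poisson(lam_true * T)
mags = m_c + rng.exponential(1.0 / (b_true * np.log(10.0)), size=n)

# posterior samples of the productivity and the GR decay parameter
n_samp = 20_000
lam_post = rng.gamma(n + 1, 1.0 / T, size=n_samp)
beta_post = rng.gamma(n + 1, 1.0 / np.sum(mags - m_c), size=n_samp)

# predictive distribution of the maximum magnitude in the next T_f years
k_future = rng.poisson(lam_post * T_f)
max_future = np.array([
    m_c + rng.exponential(1.0 / beta, size=k).max() if k > 0 else m_c  # k = 0: no event
    for beta, k in zip(beta_post, k_future)
])
print("median / 95th percentile of the future maximum magnitude:",
      np.round(np.percentile(max_future, [50, 95]), 2))
```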
The knowledge of the largest expected earthquake magnitude in a region is one of the key issues in probabilistic seismic hazard calculations and in the estimation of worst-case scenarios. Earthquake catalogues are the most informative source for the inference of earthquake magnitudes. We analysed the earthquake catalogue for Central Asia with respect to the largest expected magnitudes m_T in a pre-defined time horizon T_f using a recently developed statistical methodology, extended by the explicit probabilistic consideration of magnitude errors. For this aim, we assumed broad error distributions for historical events, whereas the magnitudes of recently recorded instrumental earthquakes have smaller errors. The results indicate high probabilities for the occurrence of large events (M ≥ 8), even in short time intervals of a few decades. The expected magnitudes relative to the assumed maximum possible magnitude are generally higher for intermediate-depth earthquakes (51-300 km) than for shallow events (0-50 km). For long future time horizons, for example a few hundred years, earthquakes with M ≥ 8.5 have to be taken into account, although, apart from the 1889 Chilik earthquake, it is probable that no such event occurred during the observation period of the catalogue.
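One simple way to realize the "explicit probabilistic consideration of magnitude errors" is Monte Carlo resampling of the catalogue magnitudes, as sketched below; the toy catalogue, the Gaussian error model, and the error sizes (broad for historical, narrow for instrumental events) are illustrative assumptions, not the Central Asia data.

```python
# Hedged sketch of propagating magnitude errors by Monte Carlo resampling:
# each magnitude is perturbed according to its assumed standard error (broad
# for historical events, narrow for instrumental ones) before re-estimating
# the GR b-value.  The toy catalogue and error sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
m_c = 5.0

# toy catalogue of (magnitude, 1-sigma error); 0.5 mimics historical events,
# 0.1 instrumental ones -- purely illustrative numbers
catalog = np.array([(7.8, 0.5), (7.3, 0.5), (6.1, 0.1), (5.4, 0.1), (6.7, 0.1)])

b_samples = []
for _ in range(10_000):
    m_pert = rng.normal(catalog[:, 0], catalog[:, 1])   # perturbed magnitudes
    m_pert = m_pert[m_pert >= m_c]                       # keep the complete part
    if len(m_pert) > 1:
        # Aki (1965) maximum-likelihood b-value for the perturbed catalogue
        b_samples.append(1.0 / (np.log(10.0) * np.mean(m_pert - m_c)))

print("b-value including magnitude errors: %.2f +/- %.2f"
      % (np.mean(b_samples), np.std(b_samples)))
```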
In the present study, we summarize and evaluate the endeavors of recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of the lowest and highest degree of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data over a long time interval, including several earthquakes with magnitudes close to m_max, are available. Even if detailed earthquake information from several centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest ones, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the case of the 2004 M6.0 Parkfield earthquake. We carry out a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result corresponds to a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and by the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can generally be explained by a stress relaxation process for the Parkfield case. This result also indicates that the data constraining the coseismic slip model can be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of ~1.15 in seismic velocity.
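To make the role of the information criteria concrete, the following toy sketch compares an unconstrained and a non-negativity-constrained linear slip inversion by a Gaussian AIC; the Green's functions, the data, and the way effective parameters are counted are illustrative assumptions and do not reproduce the Bayesian inversion or the Parkfield data described above.

```python
# Toy sketch of comparing an unconstrained and a non-negativity-constrained
# linear slip inversion (G m = d) with a Gaussian AIC, in the spirit of the
# model selection described above.  G, d, the noise level and the parameter
# counting are illustrative assumptions only.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_data, n_patches = 60, 20
G = rng.normal(size=(n_data, n_patches))                      # toy Green's functions
m_true = np.clip(rng.normal(0.5, 0.5, n_patches), 0.0, None)  # non-negative slip
d = G @ m_true + rng.normal(0.0, 0.1, n_data)                 # synthetic displacements

def aic(residual, n_params):
    """Gaussian-likelihood AIC up to an additive constant."""
    return n_data * np.log(np.sum(residual**2) / n_data) + 2 * n_params

m_uncon = np.linalg.lstsq(G, d, rcond=None)[0]
m_con, _ = nnls(G, d)   # "physically constrained" (non-negative) model

# nonzero patches as a crude proxy for the effective number of parameters
print("AIC unconstrained:", aic(d - G @ m_uncon, n_patches))
print("AIC constrained  :", aic(d - G @ m_con, np.count_nonzero(m_con)))
```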
The Gutenberg-Richter (GR) and the Omori-Utsu (OU) laws describe the energy release and temporal clustering of earthquakes and are thus of great importance for seismic hazard assessment. Motivated by experimental results that indicate stress-dependent parameters, we consider a combined global data set of 127 main shock-aftershock sequences and perform a systematic study of the relationship between main shock-induced stress changes and the associated seismicity patterns. For this purpose, we calculate space-dependent Coulomb stress changes (ΔCFS) and alternative receiver-independent stress metrics in the surroundings of the main shocks. Our results indicate a clear positive correlation between the GR b-value and the induced stress, contrasting with expectations from laboratory experiments and suggesting a crucial role of structural heterogeneity and strength variations. Furthermore, we demonstrate that the aftershock productivity increases nonlinearly with stress, while the OU parameters c and p systematically decrease with increasing stress changes. Our partly unexpected findings can have an important impact on future estimations of the aftershock hazard.
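For reference, the Coulomb stress change entering such calculations is conventionally defined as below; the choice of receiver orientation and of the effective friction coefficient is not specified here and would follow the study.

```latex
\Delta\mathrm{CFS} \;=\; \Delta\tau \;+\; \mu'\,\Delta\sigma_n ,
```

where Δτ is the change in shear stress in the slip direction of the receiver fault, Δσ_n the change in normal stress (positive for unclamping), and μ′ the effective friction coefficient.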
In recent years, the triggering of earthquakes has been discussed controversially with respect to the underlying mechanisms and the capability to evaluate the resulting seismic hazard. Apart from static stress interactions, other mechanisms, including dynamic stress transfer, have been proposed to be part of a complex triggering process. Exploiting the theoretical relation between long-term earthquake rates and the stressing rate, we demonstrate that static stress changes resulting from an earthquake rupture allow us to predict the aftershock activity quantitatively without tuning specific model parameters. These forecasts are found to be in excellent agreement with all first-order characteristics of aftershocks, in particular (1) the total number, (2) the power-law distance decay, (3) the scaling of the productivity with the main shock magnitude, (4) the foreshock probability, and (5) the empirical Båth law for the maximum aftershock magnitude, which supports the conclusion that static stress transfer is the major mechanism of earthquake triggering.
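One common way to formalize the stated relation between long-term rates, stressing rate, and triggered seismicity is the clock-advance argument: a permanent stress step ΔCFS(x) advances the local earthquake cycle by ΔCFS(x)/τ̇(x), so the expected number of triggered events at position x scales as below. This is our paraphrase of the general idea (in the spirit of Dieterich-type rate formulations), not necessarily the exact expression used in the study.

```latex
N_\mathrm{aft}(x) \;\approx\; r(x)\,\frac{\Delta\mathrm{CFS}(x)}{\dot{\tau}(x)} ,
```

with r(x) the long-term background rate density and τ̇(x) the tectonic stressing rate.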
Flood loss modeling is a central component of flood risk analysis. Conventionally, this involves univariable, deterministic stage-damage functions. Recent advancements in the field promote the use of multivariable, probabilistic loss models, which consider variables beyond inundation depth and account for prediction uncertainty. Although companies contribute significantly to total loss figures, novel modeling approaches for companies are lacking. Scarce data and the heterogeneity among companies impede the development of company flood loss models. We present three multivariable flood loss models for companies from the manufacturing, commercial, financial, and service sectors that intrinsically quantify prediction uncertainty. Based on object-level loss data (n = 1,306), we comparatively evaluate the predictive capacity of Bayesian networks, Bayesian regression, and random forests in relation to deterministic and probabilistic stage-damage functions, which serve as benchmarks. The company loss data stem from four post-event surveys in Germany between 2002 and 2013 and include information on flood intensity, company characteristics, emergency response, private precaution, and the resulting loss to buildings, equipment, and goods and stock. We find that the multivariable probabilistic models successfully identify and reproduce essential relationships of flood damage processes in the data. The assessment of model skill focuses on the precision of the probabilistic predictions and reveals that the candidate models outperform the stage-damage functions, while differences among the proposed models are negligible. Although the combination of multivariable and probabilistic loss estimation improves predictive accuracy over the entire data set, wide predictive distributions stress the necessity of quantifying uncertainty.
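As a schematic illustration of such a benchmark comparison, the sketch below fits a univariable stage-damage function and a multivariable random forest to synthetic company-loss data and reports point errors plus a simple spread measure from the per-tree predictions; the feature set, the loss relation, and all numbers are invented for illustration and are unrelated to the survey data described above.

```python
# Schematic sketch of benchmarking a univariable stage-damage function against
# a multivariable random forest on synthetic company-loss data.  Feature names,
# the loss relation and all numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 1000
depth = rng.uniform(0.0, 3.0, n)          # inundation depth [m]
lead_time = rng.uniform(0.0, 96.0, n)     # early-warning lead time [h]
assets = rng.lognormal(13.0, 1.0, n)      # company size proxy (asset value)
X = np.column_stack([depth, lead_time, assets])
# toy relative building loss in [0, 1], driven by depth and lead time
loss = np.clip(0.25 * depth - 0.001 * lead_time + 0.05 * rng.normal(size=n), 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, loss, random_state=0)

# benchmark: square-root stage-damage function fitted on depth only
coef = np.polyfit(np.sqrt(X_tr[:, 0]), y_tr, 1)
sdf_pred = np.polyval(coef, np.sqrt(X_te[:, 0]))

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
# spread of per-tree predictions as a simple measure of predictive uncertainty
tree_preds = np.stack([tree.predict(X_te) for tree in rf.estimators_])

print("MAE stage-damage function:", mean_absolute_error(y_te, sdf_pred))
print("MAE random forest        :", mean_absolute_error(y_te, rf.predict(X_te)))
print("mean per-tree std (RF)   :", tree_preds.std(axis=0).mean())
```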