TY - JOUR
A1 - Zöller, Gert
A1 - Hainzl, Sebastian
A1 - Tilmann, Frederik
A1 - Woith, Heiko
A1 - Dahm, Torsten
T1 - Comment on: Wikelski, Martin; Müller, Uschi; Scocco, Paola; Catorci, Andrea; Desinov, Lev V.; Belyaev, Mikhail Y.; Keim, Daniel A.; Pohlmeier, Winfried; Fechteler, Gerhard; Mai, Martin P.: Potential short-term earthquake forecasting by farm animal monitoring. - Ethology. - 126 (2020), 9. - pp. 931-941. - ISSN 0179-1613. - eISSN 1439-0310. - doi 10.1111/eth.13078
JF - Ethology
N2 - Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (2020; Seismol Res Lett, 89, 1238) conclude that anomalous animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns and that, consequently, the observed anomalies in animal activity do not have any forecasting power.
KW - animal behavior
KW - earthquake precursor
KW - error diagram
KW - prediction
KW - randomness
KW - statistics
Y1 - 2020
U6 - https://doi.org/10.1111/eth.13105
SN - 0179-1613
SN - 1439-0310
VL - 127
IS - 3
SP - 302
EP - 306
PB - Wiley
CY - Hoboken
ER -
TY - JOUR
A1 - Zöller, Gert
T1 - A note on the estimation of the maximum possible earthquake magnitude based on extreme value theory for the Groningen Gas Field
JF - The bulletin of the Seismological Society of America : BSSA
N2 - Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics for the practical modeling of the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by the data. Using synthetic data allows this bias to be quantified in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM, with an increment ΔM ≈ 0.5-1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009) and labeled "M_max equals M_obs plus an increment." The incomplete consideration of the entire model family for the frequency-magnitude distribution, however, neglects the scenario of a large, so far unobserved earthquake.
Y1 - 2022
U6 - https://doi.org/10.1785/0120210307
SN - 0037-1106
SN - 1943-3573
VL - 112
IS - 4
SP - 1825
EP - 1831
PB - Seismological Society of America
CY - El Cerrito, Calif.
ER -
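The sampling bias discussed in the note above can be illustrated numerically. The following sketch is not part of the cited study; the function name, catalog sizes, b-value, and magnitude bounds are illustrative assumptions. It draws synthetic catalogs from a doubly truncated Gutenberg-Richter law and compares the assumed true M_max with the maximum observed magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_gr(n, b=1.0, m0=1.5, m_max=7.0):
    """Draw n magnitudes from a doubly truncated Gutenberg-Richter law
    via inverse-transform sampling (illustrative helper, not from the paper)."""
    beta = b * np.log(10.0)
    u = rng.random(n)
    return m0 - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m0)))) / beta

m_max_true = 7.0                      # assumed true maximum possible magnitude
n_events, n_catalogs = 10000, 1000    # assumed catalog size and number of trials

max_observed = np.array([sample_gr(n_events, m_max=m_max_true).max()
                         for _ in range(n_catalogs)])

print(f"true M_max            : {m_max_true:.2f}")
print(f"mean maximum observed : {max_observed.mean():.2f}")
print(f"mean underestimation  : {(m_max_true - max_observed).mean():.2f}")
```

How strongly the maximum observed magnitude underestimates the true M_max depends on how well the tail is sampled, i.e., on the catalog size and the assumed b-value, which is the point the note makes about fitting the tail from data alone.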
TY - JOUR
A1 - Zöller, Gert
A1 - Ullah, Shahid
A1 - Bindi, Dino
A1 - Parolai, Stefano
A1 - Mikhailova, Natalya
T1 - The largest expected earthquake magnitudes in Central Asia
BT - statistical inference from an earthquake catalogue with uncertain magnitudes
JF - Seismicity, fault rupture and earthquake hazards in slowly deforming regions
N2 - The knowledge of the largest expected earthquake magnitude in a region is one of the key issues in probabilistic seismic hazard calculations and the estimation of worst-case scenarios. Earthquake catalogues are the most informative source for the inference of earthquake magnitudes. We analysed the earthquake catalogue for Central Asia with respect to the largest expected magnitudes m_T in a pre-defined time horizon T_f, using a recently developed statistical methodology extended by the explicit probabilistic consideration of magnitude errors. For this aim, we assumed broad error distributions for historical events, whereas the magnitudes of recently recorded instrumental earthquakes had smaller errors. The results indicate high probabilities for the occurrence of large events (M ≥ 8), even in short time intervals of a few decades. The expected magnitudes relative to the assumed maximum possible magnitude are generally higher for intermediate-depth earthquakes (51-300 km) than for shallow events (0-50 km). For long future time horizons, for example a few hundred years, earthquakes with M ≥ 8.5 have to be taken into account, although, apart from the 1889 Chilik earthquake, it is probable that no such event occurred during the observation period of the catalogue.
Y1 - 2017
SN - 978-1-86239-745-3
SN - 978-1-86239-964-8
U6 - https://doi.org/10.1144/SP432.3
SN - 0305-8719
VL - 432
SP - 29
EP - 40
PB - The Geological Society
CY - London
ER -
TY - JOUR
A1 - Schoppa, Lukas
A1 - Sieg, Tobias
A1 - Vogel, Kristin
A1 - Zöller, Gert
A1 - Kreibich, Heidi
T1 - Probabilistic flood loss models for companies
JF - Water resources research
N2 - Flood loss modeling is a central component of flood risk analysis. Conventionally, this involves univariable and deterministic stage-damage functions. Recent advancements in the field promote the use of multivariable and probabilistic loss models, which consider variables beyond inundation depth and account for prediction uncertainty. Although companies contribute significantly to total loss figures, novel modeling approaches for companies are lacking. Scarce data and the heterogeneity among companies impede the development of company flood loss models. We present three multivariable flood loss models for companies from the manufacturing, commercial, financial, and service sectors that intrinsically quantify prediction uncertainty. Based on object-level loss data (n = 1,306), we comparatively evaluate the predictive capacity of Bayesian networks, Bayesian regression, and random forests in relation to deterministic and probabilistic stage-damage functions, which serve as benchmarks. The company loss data stem from four post-event surveys in Germany between 2002 and 2013 and include information on flood intensity, company characteristics, emergency response, private precaution, and resulting loss to building, equipment, and goods and stock. We find that the multivariable probabilistic models successfully identify and reproduce essential relationships of flood damage processes in the data. The assessment of model skill focuses on the precision of the probabilistic predictions and reveals that the candidate models outperform the stage-damage functions, while differences among the proposed models are negligible. Although the combination of multivariable and probabilistic loss estimation improves predictive accuracy over the entire data set, wide predictive distributions stress the necessity for the quantification of uncertainty.
KW - flood loss estimation
KW - probabilistic modeling
KW - companies
KW - multivariable models
Y1 - 2020
U6 - https://doi.org/10.1029/2020WR027649
SN - 0043-1397
SN - 1944-7973
VL - 56
IS - 9
PB - American Geophysical Union
CY - Washington
ER -
TY - JOUR
A1 - Sharma, Shubham
A1 - Hainzl, Sebastian
A1 - Zöller, Gert
A1 - Holschneider, Matthias
T1 - Is Coulomb stress the best choice for aftershock forecasting?
JF - Journal of geophysical research : Solid earth
N2 - The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise from nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, as well as deep neural network (DNN) approaches are all superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps or ROC curves. However, results significantly improve for larger aftershocks and shorter time periods, but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and are also better than the distance-slip probabilistic model.
Y1 - 2020
U6 - https://doi.org/10.1029/2020JB019553
SN - 2169-9313
SN - 2169-9356
VL - 125
IS - 9
PB - American Geophysical Union
CY - Washington
ER -
TY - JOUR
A1 - Fiedler, Bernhard
A1 - Zöller, Gert
A1 - Holschneider, Matthias
A1 - Hainzl, Sebastian
T1 - Multiple Change-Point Detection in Spatiotemporal Seismicity Data
JF - Bulletin of the Seismological Society of America
N2 - Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes in earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model, estimate possible change-points, and additionally use a likelihood ratio test to assess the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, demonstrating its capability to detect real change-points. Finally, we apply this approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
Y1 - 2018
U6 - https://doi.org/10.1785/0120170236
SN - 0037-1106
SN - 1943-3573
VL - 108
IS - 3A
SP - 1147
EP - 1159
PB - Seismological Society of America
CY - Albany
ER -
TY - JOUR
A1 - Zöller, Gert
T1 - A statistical model for earthquake recurrence based on the assimilation of paleoseismicity, historic seismicity, and instrumental seismicity
JF - Journal of geophysical research : Solid earth
N2 - Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that, under certain assumptions, the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be related to accessible seismological quantities. This allows the uncertainties in the estimation of the mean recurrence interval to be reduced, especially for short paleoearthquake sequences and large dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates paleoearthquake sequences and instrumental data in a simple way. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
KW - statistical seismology
KW - paleoearthquakes
KW - stochastic models
KW - seismic hazard
Y1 - 2018
U6 - https://doi.org/10.1029/2017JB015099
SN - 2169-9313
SN - 2169-9356
VL - 123
IS - 6
SP - 4906
EP - 4921
PB - American Geophysical Union
CY - Washington
ER -
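The record above compares a Brownian passage time (BPT) recurrence model with an exponential (Poisson) benchmark. As a purely illustrative sketch, not taken from the study, the following code evaluates the BPT density and contrasts its time-dependent hazard rate with the constant Poisson hazard; the mean recurrence time and aperiodicity are assumed values, and the link between aperiodicity and the b-value described in the abstract is not implemented here.

```python
import numpy as np
from scipy.stats import norm

def bpt_pdf(t, mean, alpha):
    """Brownian passage time (inverse Gaussian) density with mean recurrence
    time `mean` and aperiodicity `alpha`."""
    return np.sqrt(mean / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mean) ** 2 / (2.0 * mean * alpha**2 * t))

def bpt_cdf(t, mean, alpha):
    """Cumulative distribution function of the BPT model."""
    u1 = (np.sqrt(t / mean) - np.sqrt(mean / t)) / alpha
    u2 = (np.sqrt(t / mean) + np.sqrt(mean / t)) / alpha
    return norm.cdf(u1) + np.exp(2.0 / alpha**2) * norm.cdf(-u2)

# assumed illustrative values: 150-yr mean recurrence, aperiodicity 0.5
mean_rec, alpha = 150.0, 0.5
t = np.linspace(1.0, 400.0, 400)

# hazard rate of the BPT model vs. the constant rate of a Poisson model
hazard_bpt = bpt_pdf(t, mean_rec, alpha) / (1.0 - bpt_cdf(t, mean_rec, alpha))
hazard_poisson = np.full_like(t, 1.0 / mean_rec)

print(f"BPT hazard at t = 50 yr : {hazard_bpt[49]:.4f} per yr")
print(f"BPT hazard at t = 300 yr: {hazard_bpt[299]:.4f} per yr")
print(f"Poisson hazard (const.) : {hazard_poisson[0]:.4f} per yr")
```

Unlike the Poisson benchmark, the BPT hazard is low shortly after an event and grows as the elapsed time approaches the mean recurrence interval, which is why such renewal models are preferred for long-term recurrence statistics.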
TY - JOUR
A1 - Salamat, Mona
A1 - Zöller, Gert
A1 - Zare, Mehdi
A1 - Amini, Mortaza
T1 - The maximum expected earthquake magnitudes in different future time intervals of six seismotectonic zones of Iran and its surroundings
JF - Journal of seismology
N2 - One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated for a predefined finite time interval and an imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of μ for a future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for different time intervals of 30, 50, and 100 years and imposed levels of confidence α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with magnitude 6.5, 7.0, and 7.5 are calculated in each zone. To assess the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals of 30 and 50 years and imposed levels of confidence 1 - α = 0.95 and 0.90, the probability distribution of μ is concentrated around μ = 7.16-8.23 in all seismic zones.
KW - Maximum expected earthquake magnitude
KW - Future time interval
KW - Level of confidence
KW - Iran
Y1 - 2018
U6 - https://doi.org/10.1007/s10950-018-9780-7
SN - 1383-4649
SN - 1573-157X
VL - 22
IS - 6
SP - 1485
EP - 1498
PB - Springer
CY - Dordrecht
ER -
TY - JOUR
A1 - Salamat, Mona
A1 - Zöller, Gert
A1 - Amini, Morteza
T1 - Prediction of the Maximum Expected Earthquake Magnitude in Iran
BT - from a Catalog with Varying Magnitude of Completeness and Uncertain Magnitudes
JF - Pure and applied geophysics
N2 - This paper concerns the problem of predicting the maximum expected earthquake magnitude μ in a future time interval T_f, given a catalog covering a time period T in the past. Different studies show the divergence of the confidence interval of the maximum possible earthquake magnitude m_max for high levels of confidence (Salamat et al. 2017). Therefore, m_max is better replaced by μ (Holschneider et al. 2011). In a previous study (Salamat et al. 2018), μ is estimated for an instrumental earthquake catalog of Iran from 1900 onwards with a constant level of completeness (m_0 = 5.5). In the current study, the Bayesian methodology developed by Zöller et al. (2014, 2015) is applied to predict μ based on a catalog consisting of both historical and instrumental parts. The catalog is first subdivided into six subcatalogs corresponding to six seismotectonic zones, and each of those zone catalogs is subsequently subdivided according to changes in completeness level and magnitude uncertainty. For this, broad and narrow error distributions are considered for historical and instrumental earthquakes, respectively. We assume that earthquakes follow a Poisson process in time and a Gutenberg-Richter law in the magnitude domain, with a priori unknown a- and b-values that are first estimated by Bayes' theorem and subsequently used to estimate μ. Imposing different values of m_max for the different seismotectonic zones, namely Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran, the results show considerable probabilities for the occurrence of earthquakes with Mw ≥ 7.5 for short T_f, whereas for long T_f, μ is almost equal to m_max.
KW - Maximum expected earthquake magnitude
KW - completeness levels
KW - magnitude errors
KW - Bayesian method
KW - Iran
Y1 - 2019
U6 - https://doi.org/10.1007/s00024-019-02141-3
SN - 0033-4553
SN - 1420-9136
VL - 176
IS - 8
SP - 3425
EP - 3438
PB - Springer
CY - Basel
ER -
TY - JOUR
A1 - Shcherbakov, Robert
A1 - Zhuang, Jiancang
A1 - Zöller, Gert
A1 - Ogata, Yosihiko
T1 - Forecasting the magnitude of the largest expected earthquake
JF - Nature Communications
N2 - The majority of earthquakes occur unexpectedly and can trigger subsequent sequences of events that can culminate in more powerful earthquakes. This self-exciting nature of seismicity generates complex clustering of earthquakes in space and time. Therefore, the problem of constraining the magnitude of the largest expected earthquake during a future time interval is of critical importance in mitigating earthquake hazard. We address this problem by developing a methodology to compute the probabilities for such extreme earthquakes to be above certain magnitudes. We combine Bayesian methods with extreme value theory and assume that the occurrence of earthquakes can be described by the Epidemic Type Aftershock Sequence (ETAS) process. We analyze in detail the application of this methodology to the 2016 Kumamoto, Japan, earthquake sequence. We are able to retrospectively estimate the probabilities of having large subsequent earthquakes during several stages of the evolution of this sequence.
Y1 - 2019
U6 - https://doi.org/10.1038/s41467-019-11958-4
SN - 2041-1723
VL - 10
PB - Nature Publishing Group
CY - London
ER -
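Several of the records above estimate the magnitude of the largest expected earthquake in a future time interval T_f within a Poisson/Gutenberg-Richter framework. The following minimal sketch shows the baseline exceedance probability under the simplest version of that framework, assuming a stationary Poisson process and an untruncated Gutenberg-Richter law; the rate, b-value, and completeness magnitude are illustrative assumptions, not results from these studies.

```python
import numpy as np

def prob_exceedance(m, t_f, rate_m0, b=1.0, m0=4.5):
    """P(largest magnitude in the next t_f years >= m) for a stationary
    Poisson process with Gutenberg-Richter magnitudes (no upper truncation).
    rate_m0 is the annual rate of events with magnitude >= m0."""
    rate_m = rate_m0 * 10.0 ** (-b * (m - m0))   # annual rate of events >= m
    return 1.0 - np.exp(-rate_m * t_f)

# illustrative numbers only (not taken from the studies above)
for t_f in (30, 50, 100):
    p = prob_exceedance(m=7.5, t_f=t_f, rate_m0=2.0, b=1.0, m0=4.5)
    print(f"T_f = {t_f:3d} yr : P(M >= 7.5) = {p:.2f}")
```

Replacing the untruncated law with a doubly truncated one, treating the parameters as uncertain in a Bayesian setting, or describing clustered seismicity with an ETAS process, as done in the studies above, modifies this baseline calculation accordingly.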