During the last 5 Ma the Earth's ocean-atmosphere system passed through several major transitions, many of which are discussed as possible triggers for human evolution. A classic hypothesis in this context links the closure of the Panama Strait, the intensification of Northern Hemisphere Glaciation, and a stepwise increase in African aridity to the first appearance of the genus Homo about 2.5-2.7 Ma ago. Apart from the fact that the correlation between these events does not necessarily imply causality, many attempts to establish a relationship between climate and evolution fail due to the challenge of precisely localizing an a priori unknown number of changes potentially underlying complex climate records. The kernel-based Bayesian inference approach applied here allows inferring the location, generic shape, and temporal scale of multiple transitions in established records of Plio-Pleistocene African climate. By defining a transparent probabilistic analysis strategy, we are able to identify conjoint changes occurring across the investigated terrigenous dust records from Ocean Drilling Program (ODP) sites in the Atlantic Ocean (ODP 659), the Arabian Sea (ODP 721/722), and the Mediterranean Sea (ODP 967). The study indicates a two-step transition in the African climate proxy records at 2.35-2.10 Ma and 1.70-1.50 Ma, which may be associated with the reorganization of the Hadley-Walker Circulation.
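A minimal sketch of how such a kernel-based change-point inference can be set up, assuming a single tanh-shaped transition kernel, a flat prior over its location and temporal scale, and a synthetic record standing in for the ODP dust data; this illustrates the general technique, not the authors' implementation:

```python
import numpy as np

# Synthetic dust-flux-like record: a noisy two-level signal with one transition.
rng = np.random.default_rng(0)
t = np.linspace(-5.0, 0.0, 500)            # age axis in Ma (illustrative)
y = 0.5 * np.tanh((t + 2.2) / 0.1) + rng.normal(0, 0.2, t.size)

def log_likelihood(t0, width, t, y, sigma=0.2):
    """Log-likelihood of a single tanh-shaped transition at age t0 with
    temporal scale `width`; offset and amplitude fitted by least squares."""
    k = np.tanh((t - t0) / width)           # transition kernel
    X = np.column_stack([np.ones_like(t), k])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return -0.5 * np.sum(resid**2) / sigma**2

# Posterior over transition location and scale (flat priors assumed).
t0s = np.linspace(-4.5, -0.5, 200)
widths = [0.05, 0.1, 0.2, 0.4]
ll = np.array([[log_likelihood(t0, w, t, y) for t0 in t0s] for w in widths])
post = np.exp(ll - ll.max())
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
print(f"most probable transition: {t0s[j]:.2f} Ma, scale {widths[i]} Ma")
```

The posterior grid over location and scale generalizes to multiple transitions by summing several kernels, at the cost of a larger model space.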
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting the spatial distribution of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress changes. These uncertainties arise mainly from nonunique slip inversions and unknown receiver faults; for the latter in particular, results depend strongly on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, and deep neural network (DNN) approaches are all superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effects of the magnitude cutoff, grid-size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for ranking the stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results improve significantly for larger aftershocks and shorter time periods, without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers; these tests confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and also better than the distance-slip probabilistic model.
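For the binary (aftershock yes/no) comparison, ranking competing stress metrics reduces to an ROC analysis per grid cell. A minimal sketch, with synthetic scores standing in for actual CFS and alternative stress values:

```python
import numpy as np

def roc_auc(score, label):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen aftershock cell outranks a randomly chosen empty cell."""
    ranks = np.argsort(np.argsort(score)) + 1.0   # 1-based ranks (no ties)
    pos = label.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Illustrative comparison of two stress metrics on synthetic grid cells.
rng = np.random.default_rng(1)
n = 5000
occurred = rng.random(n) < 0.1                 # binary: aftershock yes/no
cfs = rng.normal(occurred * 0.5, 1.0)          # stand-in for CFS values
max_shear = rng.normal(occurred * 0.8, 1.0)    # stand-in for a simpler metric
for name, metric in [("CFS", cfs), ("max shear", max_shear)]:
    print(f"{name}: AUC = {roc_auc(metric, occurred):.3f}")
```

Because the AUC equals the probability that an aftershock cell outranks an empty cell, it is invariant under any monotone rescaling of the metric, which is what makes it suitable for comparing stress quantities with different units.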
The Gutenberg-Richter relation for earthquake magnitudes is the most famous empirical law in seismology. It states that the frequency of earthquake magnitudes follows an exponential distribution; this has been found to be a robust feature of seismicity above the completeness magnitude, independent of whether global, regional, or local seismicity is analyzed. However, the exponent b of the distribution varies significantly in space and time, which is important for process understanding and seismic hazard assessment, in particular because the Gutenberg-Richter b-value acts as a proxy for the stress state and quantifies the ratio of large to small earthquakes. In our work, we focus on the automatic detection of statistically significant temporal changes of the b-value in seismicity data. In our approach, we use Bayes factors for model selection and estimate multiple change-points of the frequency-magnitude distribution in time. The method is first applied to synthetic data, showing its capability to detect change-points as a function of sample size and b-value contrast. Finally, we apply this approach to examples of observational data sets for which b-value changes have previously been reported. Our analysis of foreshock and aftershock sequences related to mainshocks, as well as earthquake swarms, shows that only a portion of the b-value changes is statistically significant.
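A minimal sketch of a Bayes-factor test for a single change-point, assuming magnitudes above completeness follow the exponential Gutenberg-Richter law with a conjugate Gamma prior on beta = b ln 10; the prior parameters and the synthetic catalog are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(m, mc, a=2.0, r=2.0):
    """Log marginal likelihood of magnitudes m (>= mc) under an exponential
    Gutenberg-Richter model with a Gamma(a, r) prior on beta = b * ln(10)."""
    x = np.asarray(m) - mc
    n, s = x.size, x.sum()
    return gammaln(a + n) - gammaln(a) + a * np.log(r) - (a + n) * np.log(r + s)

def log_bayes_factor(m, mc):
    """Change vs. no-change: average the two-segment marginal over all
    admissible change-point positions (uniform prior on location)."""
    n = len(m)
    lm0 = log_marginal(m, mc)
    parts = [log_marginal(m[:k], mc) + log_marginal(m[k:], mc)
             for k in range(10, n - 10)]
    lm1 = np.logaddexp.reduce(parts) - np.log(len(parts))
    return lm1 - lm0

# Synthetic catalog: b jumps from 1.0 to 1.5 halfway through.
rng = np.random.default_rng(2)
mc = 2.0
beta1, beta2 = 1.0 * np.log(10), 1.5 * np.log(10)
m = np.concatenate([mc + rng.exponential(1 / beta1, 500),
                    mc + rng.exponential(1 / beta2, 500)])
print(f"log Bayes factor (change vs. none): {log_bayes_factor(m, mc):.1f}")
```

With conjugate priors the per-segment marginal likelihood is available in closed form, so extending the search to multiple change-points amounts to recursing over segments.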
The additional magnetic field produced by the ionospheric current system is a part of the Earth's magnetic field. This current system is a highly variable part of a global electric circuit. The interaction of the solar wind and the interplanetary magnetic field (IMF) with the Earth's magnetosphere is the external driver of the global electric circuit in the ionosphere. The energy is transferred to the Earth's ionosphere via field-aligned currents (FACs). The interactions between neutral and charged particles in the ionosphere lead to the so-called thermospheric neutral wind dynamo, which represents the second important driver of the global current system. Both processes are components of the magnetosphere–ionosphere–thermosphere (MIT) system; they depend on solar and geomagnetic conditions and show significant seasonal and UT variations.
Modeling the global dynamics of the Earth's ionospheric current system is the first aim of this investigation. For our study, we use the Potsdam version of the Upper Atmosphere Model (UAM-P). The UAM is a first-principles, time-dependent, and fully self-consistent numerical global model. It includes the thermosphere, ionosphere, plasmasphere, and inner magnetosphere, as well as the electrodynamics of the coupled MIT system, for the altitude range from 80 (60) km up to 15 Earth radii. The UAM-P differs from the UAM in its new electric field block. For this study, the low-latitude and equatorial electrodynamics of the UAM-P model were improved.
The calculation of the ionospheric current system's contribution to the Earth's magnetic field is the second aim of this study. We present a method that allows computing the additional magnetic field inside and outside the current layer, as generated by the space current density distribution, using the Biot-Savart law. Additionally, we compare the additional magnetic field calculated from 2D (equivalent currents) and 3D current distributions.
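A minimal sketch of a direct Biot-Savart summation over a discretized volume current density; the filament geometry and current values below are illustrative placeholders, not UAM-P output:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def biot_savart(r_obs, r_src, j_src, dV):
    """Magnetic field at points r_obs [m] from a volume current density
    j_src [A/m^2] sampled at r_src with cell volume dV [m^3], by direct
    Biot-Savart summation over all source cells."""
    B = np.zeros_like(r_obs)
    for i, r in enumerate(r_obs):
        d = r - r_src                              # (N, 3) separation vectors
        dist3 = np.linalg.norm(d, axis=1) ** 3
        B[i] = MU0 / (4 * np.pi) * np.sum(
            np.cross(j_src, d) * (dV / dist3)[:, None], axis=0)
    return B

# Sanity check on a straight filament along z carrying I = j * area = 1 kA:
# the analytic field mu0*I/(2*pi*rho) should be recovered, so B_y roughly
# halves when the observation distance rho doubles.
z = np.linspace(-5e5, 5e5, 2001)
r_src = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
area, dz = 1.0, z[1] - z[0]
j_src = np.tile([0.0, 0.0, 1e3], (z.size, 1))      # 1 kA/m^2 along z
r_obs = np.array([[1e4, 0.0, 0.0], [2e4, 0.0, 0.0]])
print(biot_savart(r_obs, r_src, j_src, area * dz)[:, 1])
```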
We propose a reduced dynamical system describing the coupled evolution of fluid flow and magnetic field at the top of the Earth's core between the years 1900 and 2014. The flow evolution is modeled with a first-order autoregressive process, while the magnetic field obeys the classical frozen-flux equation. An ensemble Kalman filter algorithm serves to constrain the dynamics with the geomagnetic field and its secular variation given by the COV-OBS.x1 model. Using a large ensemble with 40,000 members provides meaningful statistics, including reliable error estimates. The model highlights two distinct flow scales. Slowly varying large-scale elements include the already documented eccentric gyre. Localized short-lived structures include distinctly ageostrophic features such as the high-latitude polar jet in the Northern Hemisphere. Comparisons with independent observations of length-of-day variations not only validate the flow estimates but also suggest an acceleration of the geostrophic flows over the last century. Hindcasting tests show that our model outperforms simpler prediction schemes (linear extrapolation and stationary flow). The predictability limit, of about 2,000 years for the magnetic dipole component, is mostly determined by the random, fast-varying dynamics of the flow and much less by the geomagnetic data quality or the lack of small-scale information.
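A minimal sketch of a stochastic (perturbed-observation) ensemble Kalman filter applied to a toy AR(1) state with a linear observation operator; the dimensions, noise levels, and operator are illustrative stand-ins for the flow coefficients and geomagnetic observations, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

n_state, n_obs, n_ens, n_steps = 50, 20, 1000, 100
phi, q, r = 0.95, 0.1, 0.3            # AR(1) coefficient, model/obs noise
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)  # obs operator

truth = np.zeros(n_state)
ens = rng.normal(size=(n_ens, n_state))
for _ in range(n_steps):
    # Forecast: truth and every ensemble member follow the AR(1) process.
    truth = phi * truth + rng.normal(0, q, n_state)
    ens = phi * ens + rng.normal(0, q, (n_ens, n_state))
    # Analysis: Kalman update with ensemble-estimated covariances.
    y = H @ truth + rng.normal(0, r, n_obs)
    X = ens - ens.mean(axis=0)
    Y = X @ H.T
    PHt = X.T @ Y / (n_ens - 1)                        # Pf H'
    S = Y.T @ Y / (n_ens - 1) + r**2 * np.eye(n_obs)   # H Pf H' + R
    K = PHt @ np.linalg.inv(S)
    perturbed = y + rng.normal(0, r, (n_ens, n_obs))
    ens = ens + (perturbed - ens @ H.T) @ K.T

err = np.linalg.norm(ens.mean(axis=0) - truth) / np.linalg.norm(truth)
print(f"relative analysis error: {err:.2f}")
```

The covariances entering the Kalman gain are estimated entirely from the ensemble spread, which is why a large ensemble (40,000 members in the study, 1,000 here) is needed for reliable error statistics.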
Preface
(2018)
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the largest observed event, the upper limit is calculated from the catalog and the statistical model. For this purpose, we use both the original catalog, to which no declustering method was applied, and a declustered version of the catalog. As shown by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required; in this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals in the Central Iran and Zagros seismotectonic zones are acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan, and Makran are much less promising. The results indicate that estimating m_max from an earthquake catalog alone at reasonable levels of confidence is almost impossible.
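A minimal sketch of the upper confidence bound computation under the doubly truncated Gutenberg-Richter model, following the logic of Holschneider et al. (2011); the beta value, catalog size, and magnitudes below are illustrative numbers, not the Iran zones:

```python
import numpy as np
from scipy.optimize import brentq

def mmax_upper_bound(mu, mc, beta, n, confidence=0.95):
    """Upper bound of the confidence interval for m_max under a doubly
    truncated Gutenberg-Richter law. mu: largest observed magnitude,
    n: number of events above mc. Returns np.inf when the interval
    is unbounded."""
    alpha = 1.0 - confidence

    def p_max_below_mu(mmax):
        # P(maximum of n magnitudes <= mu | m_max = mmax)
        num = 1.0 - np.exp(-beta * (mu - mc))
        den = 1.0 - np.exp(-beta * (mmax - mc))
        return (num / den) ** n

    # If even an infinite m_max is not rejected, the interval is unbounded.
    if (1.0 - np.exp(-beta * (mu - mc))) ** n >= alpha:
        return np.inf
    return brentq(lambda m: p_max_below_mu(m) - alpha, mu + 1e-6, mu + 20.0)

# Illustrative numbers (not the Iran catalog): b = 1, 500 events above mc.
beta = 1.0 * np.log(10)
print(mmax_upper_bound(mu=6.7, mc=4.0, beta=beta, n=500, confidence=0.60))
print(mmax_upper_bound(mu=6.7, mc=4.0, beta=beta, n=500, confidence=0.95))
```

The first call returns a finite bound, while the higher confidence level in the second call already yields an unbounded interval, illustrating why high-confidence statements about m_max are rarely supported by catalog data alone.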
The aim of this paper is to estimate the Hurst parameter of fractional Gaussian noise (FGN) using Bayesian inference. We propose an estimation technique that takes into account the full correlation structure of this process. Instead of using the integrated time series and then applying an estimator for its Hurst exponent, we use the noise signal directly. As an application, we analyze the Nile River time series, where we find a posterior distribution that is compatible with previous findings. In addition, our technique provides natural error bars for the Hurst exponent.
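A minimal sketch of the exact-likelihood approach on the noise signal itself, using the full Toeplitz autocovariance of FGN and a flat prior on H; the grid resolution and the white-noise stand-in data are illustrative choices, not the Nile record:

```python
import numpy as np
from scipy.linalg import toeplitz, cho_factor, cho_solve

def fgn_autocov(h, n):
    """Autocovariance of unit-variance fractional Gaussian noise with
    Hurst exponent h at lags 0..n-1."""
    k = np.arange(n)
    return 0.5 * (np.abs(k - 1) ** (2 * h) - 2 * k ** (2 * h) + (k + 1) ** (2 * h))

def log_likelihood(h, x):
    """Gaussian log-likelihood of the noise signal x under FGN(h), using
    the full Toeplitz correlation structure; scale profiled out by MLE."""
    n = x.size
    cf = cho_factor(toeplitz(fgn_autocov(h, n)))
    quad = x @ cho_solve(cf, x)
    logdet = 2 * np.log(np.diag(cf[0])).sum()
    return -0.5 * (logdet + n * np.log(quad / n))

# Posterior over H on a grid (flat prior), synthetic data as stand-in.
rng = np.random.default_rng(4)
x = rng.standard_normal(300)        # H = 0.5 white-noise stand-in
hs = np.linspace(0.05, 0.95, 91)
ll = np.array([log_likelihood(h, x) for h in hs])
post = np.exp(ll - ll.max())
post /= post.sum()
h_mean = (hs * post).sum()
h_std = np.sqrt(((hs - h_mean) ** 2 * post).sum())
print(f"posterior: H = {h_mean:.2f} +/- {h_std:.2f}")
```

The posterior standard deviation printed at the end is the kind of natural error bar the abstract refers to.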
Wavelet modelling of the gravity field by domain decomposition methods: an example over Japan
(2011)
With the advent of satellite gravity, large gravity data sets of unprecedented quality at low and medium resolution have become available. For local, high-resolution field modelling, they need to be combined with surface gravity data. Such models are then used for various applications, from the study of the Earth's interior to the determination of oceanic currents. Here we show how to realize such a combination in a flexible way using spherical wavelets and a domain decomposition approach. This iterative method, based on the Schwarz algorithms, splits a large problem into smaller ones and avoids the calculation of the entire normal system, which may be huge if high resolution is sought over wide areas. A subdomain is defined as the harmonic space spanned by a subset of the wavelet family. Based on the localization properties of the wavelets in space and frequency, we define hierarchical subdomains of wavelets at different scales. On each scale, blocks of subdomains are defined by a tailored spatial splitting of the area. The data weighting and regularization are iteratively adjusted for the subdomains, which makes it possible to handle heterogeneity in the data quality or in the gravity variations. Different levels of approximation of the subdomain normal systems are also introduced, corresponding to local averages of the data at different resolution levels.
We first provide the theoretical background on domain decomposition methods. Then, we validate the method with synthetic data, considering two kinds of noise: white noise and coloured noise. We then apply the method to data over Japan, where we combine a satellite-based geopotential model, EIGEN-GL04S, with a local gravity model derived from land and marine gravity data and an altimetry-derived marine gravity model. A hybrid spherical harmonics/wavelet model of the geoid is obtained at about 15 km resolution, and a corrector grid for the surface model is derived.
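A minimal sketch of a Schwarz-type iteration on a least-squares problem, with overlapping blocks of coefficients standing in for the wavelet subdomains; the matrix sizes, overlap, and damping factor are illustrative choices, not the gravity setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy least-squares analogue of the wavelet normal equations: solve
# min ||A x - y||^2 by damped additive Schwarz sweeps over overlapping
# coefficient blocks, never forming the full normal system.
n_obs, n_coef = 2000, 400
A = rng.normal(size=(n_obs, n_coef)) / np.sqrt(n_obs)
A += np.eye(n_obs, n_coef)           # keep the problem well conditioned
x_true = rng.normal(size=n_coef)
y = A @ x_true + 0.01 * rng.normal(size=n_obs)

# Overlapping subdomains: blocks of 120 coefficients with 20 overlap.
blocks = [np.arange(s, min(s + 120, n_coef)) for s in range(0, n_coef, 100)]

x = np.zeros(n_coef)
for sweep in range(30):
    r = y - A @ x                    # global residual
    dx = np.zeros(n_coef)
    for idx in blocks:
        Ai = A[:, idx]
        # Solve only the small local normal system (Ai' Ai) dx_i = Ai' r.
        dx[idx] += np.linalg.solve(Ai.T @ Ai, Ai.T @ r)
    x += 0.5 * dx                    # damping handles the overlaps
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error after 30 sweeps: {rel_err:.3f}")
```

Each sweep solves only the small local normal systems, never the full n_coef x n_coef normal matrix, which is the point of the decomposition when high resolution over wide areas is sought.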
We present an alarm-based earthquake forecast model that uses early aftershock statistics (EAST). This model is based on the hypothesis that the time delay before the onset of the power-law aftershock decay rate decreases as the level of stress and the seismogenic potential increase. Here, we estimate this time delay from <t_g>, the time constant of the Omori-Utsu law. To isolate space-time regions with a relatively high level of stress, the single local variable of our forecast model is the E_a value, the ratio between the long-term and short-term estimates of <t_g>. When and where the E_a value exceeds a given threshold (i.e., the c value is abnormally small), an alarm is issued, and an earthquake is expected to occur during the next time step. Retrospective tests show that the EAST model has better predictive power than a stationary reference model based on smoothed extrapolation of past seismicity. The official prospective test for California started on 1 July 2009 in the testing center of the Collaboratory for the Study of Earthquake Predictability (CSEP). During the first nine months, 44 M >= 4 earthquakes occurred in the testing area. For this time period, the EAST model has better predictive power than the reference model at the 1% significance level. Because the EAST model also has better predictive power than several time-varying clustering models tested in CSEP at the 1% significance level, we suggest that our successful prospective results are not due solely to the space-time clustering of aftershocks.
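A minimal sketch of the alarm rule: estimate the Omori-Utsu time constant c on a long and a short window and issue an alarm when their ratio exceeds a threshold. The likelihood uses a fixed p, and the stand-in delay samples and threshold are illustrative assumptions, not the EAST calibration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def omori_c(times, p=1.1):
    """Crude maximum-likelihood estimate of the Omori-Utsu time constant c
    from aftershock delay times (days), with the exponent p held fixed."""
    t = np.sort(np.asarray(times))
    T = t[-1]

    def neg_log_lik(log_c):
        c = np.exp(log_c)
        # Density ~ (t + c)^-p, normalized over the window [0, T].
        norm = (c ** (1 - p) - (T + c) ** (1 - p)) / (p - 1)
        return -(np.sum(-p * np.log(t + c)) - t.size * np.log(norm))

    res = minimize_scalar(neg_log_lik, bounds=(-8, 2), method="bounded")
    return np.exp(res.x)

# Alarm rule sketch: E_a as the ratio of long-term to short-term c
# estimates; an alarm is issued where E_a exceeds a threshold.
rng = np.random.default_rng(6)
long_term = rng.exponential(0.05, 400)   # stand-in aftershock delays, days
short_term = rng.exponential(0.005, 60)  # abnormally small recent c
E_a = omori_c(long_term) / omori_c(short_term)
threshold = 2.0                          # illustrative threshold
print(f"E_a = {E_a:.1f} -> alarm: {E_a > threshold}")
```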