Earthquake catalogs are probably the most informative data source about spatiotemporal seismicity evolution. The catalog quality in one of the most active seismogenic zones in the world, Japan, is excellent, although changes in quality arising, for example, from an evolving network are clearly present. Here, we seek the best estimate for the largest expected earthquake in a given future time interval from a combination of historic and instrumental earthquake catalogs. We extend the technique introduced by Zöller et al. (2013) to estimate the maximum magnitude in a time window of length T_f for earthquake catalogs with varying levels of completeness. In particular, we consider the case in which two types of catalogs are available: a historic catalog and an instrumental catalog. This leads to competing interests with respect to the estimation of the two parameters of the Gutenberg-Richter law: the b-value and the event rate lambda above a given lower-magnitude threshold (the a-value). The b-value is estimated most precisely from the frequently occurring small earthquakes; however, the tendency of small events to cluster in aftershocks, swarms, etc. violates the assumption of a Poisson process that is used for the estimation of lambda. We suggest resolving this conflict by estimating b solely from instrumental seismicity and using large-magnitude events from historic catalogs for the earthquake-rate estimation. Applying the method to Japan, there is a probability of about 20% that the maximum expected magnitude during any future time interval of length T_f = 30 years is m >= 9.0. Studies of different subregions in Japan indicate high probabilities for M 8 earthquakes along the Tohoku arc and relatively low probabilities in the Tokai, Tonankai, and Nankai regions. Finally, for scenarios related to long time horizons and high confidence levels, the maximum expected magnitude will be around 10.
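Under the two assumptions named in the abstract (stationary Poisson occurrence in time, Gutenberg-Richter scaling in magnitude), the probability that the maximum magnitude in a future window reaches a given level has a simple closed form. A minimal sketch with illustrative parameter values, not the fitted values from the study:

```python
import math

def prob_max_magnitude_exceeds(m, m0, lam0, b, t_f):
    """P(largest event in a window of length t_f has magnitude >= m),
    assuming a stationary Poisson process with rate lam0 of events >= m0
    and Gutenberg-Richter scaling of the rate with magnitude."""
    rate_m = lam0 * 10.0 ** (-b * (m - m0))  # annual rate of events >= m
    return 1.0 - math.exp(-rate_m * t_f)

# Illustrative numbers only (lam0 = 10 events/yr above m0 = 5, b = 1):
p = prob_max_magnitude_exceeds(m=9.0, m0=5.0, lam0=10.0, b=1.0, t_f=30.0)
```

In the Bayesian treatment of the papers below, lam0 and b carry posterior uncertainty; this sketch shows only the plug-in calculation.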
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and calculate precise confidence intervals based on an imposed level of confidence alpha. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity lambda and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables lambda and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity but high-risk Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
The injection of fluids is a well-known origin for the triggering of earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake, on 27 May 2000, lies well within the expectation from past seismicity without adjusting any parameters. Conversely, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
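The inverse calculation, from a tolerable magnitude to a volume bound, can be sketched with the seismogenic-index model of Shapiro and co-workers (a common parameterization of injection-induced seismicity, not necessarily the formulation used in this paper), in which the expected count of events above magnitude m scales linearly with injected volume; all parameter values below are illustrative:

```python
import math

def max_tolerable_volume(m_max, b, sigma, p_safe):
    """Upper bound on injected volume (m^3) such that the probability of
    observing NO induced event with magnitude >= m_max stays above p_safe.
    Assumes the seismogenic-index model N(>=m) = V * 10**(sigma - b*m)
    (an assumption of this sketch) and Poisson counts, P(no event) = exp(-N)."""
    n_allowed = -math.log(p_safe)  # largest tolerable expected event count
    return n_allowed * 10.0 ** (b * m_max - sigma)

# Illustrative values: b = 1, seismogenic index sigma = -1,
# tolerate m >= 4 with 95% confidence of no such event
v = max_tolerable_volume(m_max=4.0, b=1.0, sigma=-1.0, p_safe=0.95)
```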
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including estimation of the maximum earthquake magnitude, has been considered necessary to decide on future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
In the present study, we summarize and evaluate the endeavors of recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of the lowest and highest degrees of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data covering a long time interval, including several earthquakes with magnitudes close to m_max, are available. Even if detailed earthquake information spanning several centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence-time distribution of large earthquakes, based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors accounting for grouped magnitudes and random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early stage of seismicity evolution, even though only a small number of large earthquakes have occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and therefore cannot be neglected. We show how to correct for the bias caused by magnitude errors. For the region of Parkfield we find that the aperiodicity, or coefficient of variation, is clearly higher than in studies based solely on the large earthquakes.
We investigate the influence of spatial heterogeneities on various aspects of brittle failure and seismicity in a model of a large strike-slip fault. The model dynamics is governed by realistic boundary conditions consisting of constant-velocity motion of regions around the fault, static/kinetic friction laws, creep with depth-dependent coefficients, and 3-D elastic stress transfer. The dynamic rupture is approximated on a continuous time scale using a finite stress propagation velocity ("quasidynamic model"). The model produces a "brittle-ductile" transition at a depth of about 12.5 km, realistic hypocenter distributions, and other features of seismicity compatible with observations. Previous work suggested that the range of size scales in the distribution of strength-stress heterogeneities acts as a tuning parameter of the dynamics. Here we test this hypothesis by performing a systematic parameter-space study with different forms of heterogeneities. In particular, we analyze spatial heterogeneities that can be tuned by a single parameter in two distributions: (1) high-stress-drop barriers in near-vertical directions and (2) spatial heterogeneities with fractal properties and variable fractal dimension. The results indicate that the first form of heterogeneities provides an effective means of tuning the behavior while the second does not. In relatively homogeneous cases, the fault self-organizes into large-scale patches, and big events are associated with inward failure of individual patches and sequential failures of different patches. The frequency-size event statistics in such cases are compatible with the characteristic earthquake distribution, and large events are quasi-periodic in time. In strongly heterogeneous or near-critical cases, the rupture histories are highly discontinuous and consist of complex migration patterns of slip on the fault. In such cases, the frequency-size and temporal statistics follow approximately power-law relations.
We show that realistic aftershock sequences with space-time characteristics compatible with observations are generated by a model consisting of brittle fault segments separated by creeping zones. The dynamics of the brittle regions is governed by static/kinetic friction, 3D elastic stress transfer and small creep deformation. The creeping parts are characterized by high ongoing creep velocities. These regions store stress during earthquake failures and then release it in the interseismic periods. The resulting postseismic deformation leads to aftershock sequences following the modified Omori law. The ratio of creep coefficients in the brittle and creeping sections determines the duration of the postseismic transients and the exponent p of the modified Omori law.
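The modified Omori law referred to in the two abstracts above is the rate n(t) = K / (c + t)^p after a mainshock. A small sketch, with illustrative parameter values, of the rate and the expected aftershock count between two times (analytic integral for p != 1):

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p
    at time t (days) after the mainshock. K, c, p are illustrative."""
    return K / (c + t) ** p

def omori_count(t1, t2, K=100.0, c=0.1, p=1.1):
    """Expected number of aftershocks in [t1, t2], from the analytic
    antiderivative of the rate (valid for p != 1)."""
    antider = lambda t: (c + t) ** (1.0 - p) / (1.0 - p)
    return K * (antider(t2) - antider(t1))
```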
The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis make it possible to extract maximum quantitative information related to the short-term dynamics of geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. They also show that, by using magnetogram regularity to reflect magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
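The scaling exponent β of a power-law spectrum P(f) ~ f^(-β) can be estimated by a least-squares fit of log power against log frequency. A generic sketch (periodogram plus linear fit, not the wavelet-based estimator of the study):

```python
import numpy as np

def spectral_exponent(x, dt=1.0):
    """Estimate beta in P(f) ~ f**(-beta) from the least-squares slope of
    log-power versus log-frequency; for self-affine signals this slope
    characterizes persistence."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x), d=dt)
    mask = freq > 0                      # drop the DC bin
    slope, _ = np.polyfit(np.log(freq[mask]), np.log(power[mask]), 1)
    return -slope

rng = np.random.default_rng(0)
beta = spectral_exponent(np.cumsum(rng.standard_normal(4096)))  # near 2 for a Brownian-like series
```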
We explore fluctuations of the horizontal component of the Earth's magnetic field to identify scaling behaviour of the temporal variability in geomagnetic data recorded by the Intermagnet observatories during solar cycle 23 (years 1996 to 2005). In this work, we use the remarkable ability of wavelet scaling exponents to highlight the singularities associated with discontinuities present in the magnetograms obtained at two magnetic observatories for six intense magnetic storms, including the sudden storm commencements of 14 July 2000, 29-31 October 2003, and 20-21 November 2003. In the active intervals that occurred during geomagnetic storms, we observe a rapid and unidirectional change in the spectral scaling exponent at the time of storm onset. The corresponding fractal features suggest that the dynamics of the whole time series is similar to that of fractional Brownian motion. Our findings point to a relatively sudden change, related to the emergence of persistence in the fractal power-exponent fluctuations, that precedes an intense magnetic storm. These first results could be useful in the framework of extreme-event prediction studies.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions with mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the 2004 M6.0 Parkfield earthquake. We carry out a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can be generally explained by a stress relaxation process in the Parkfield case. This result also indicates that the data constraining the coseismic slip model can be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of approximately 1.15 in seismic velocity.
We develop a multigrid, multiple-time-stepping scheme to reduce the computational effort of calculating complex stress interactions on a 2D planar strike-slip fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scales, grid coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first, with a small contribution, is computed on a coarse level, and the rest, for strong interactions, on a fine level. This partition leads to a significant reduction in the number of computations. The reduction of complexity is further enhanced by combining the multigrid approach with multiple time stepping. Computational efficiency is enhanced by a factor of 10 while retaining reasonable accuracy compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
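The cut-off-radius splitting can be illustrated in a few lines: partition the dense interaction matrix into a near-field part, evaluated exactly every step, and a far-field remainder that a multilevel scheme can treat coarsely or less often. A toy 1-D sketch (the hypothetical 1/r^2-type kernel and cell positions are illustrative, not the paper's elastic kernel):

```python
import numpy as np

def split_by_cutoff(K, positions, r_cut):
    """Split a dense interaction matrix K into a near-field part
    (cell distance <= r_cut), kept on the fine level, and a far-field
    remainder that can be evaluated on a coarse grid and/or with a
    larger time step."""
    dist = np.abs(positions[:, None] - positions[None, :])
    near = np.where(dist <= r_cut, K, 0.0)
    return near, K - near

# Toy example: a strip of 8 fault cells with a 1/(1+r)^2-like interaction
pos = np.arange(8, dtype=float)
K = 1.0 / (1.0 + np.abs(pos[:, None] - pos[None, :])) ** 2
near, far = split_by_cutoff(K, pos, r_cut=2.0)
```

The full product K @ s is recovered exactly as near @ s + far @ s, which is what allows the far part to be approximated without touching the near-field accuracy.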
We present an alarm-based earthquake forecast model that uses early aftershock statistics (EAST). This model is based on the hypothesis that the time delay before the onset of the power-law aftershock decay rate decreases as the level of stress and the seismogenic potential increase. Here, we estimate this time delay from <t_g>, the time constant of the Omori-Utsu law. To isolate space-time regions with a relatively high level of stress, the single local variable of our forecast model is the E_a value, the ratio between the long-term and short-term estimations of <t_g>. When and where the E_a value exceeds a given threshold (i.e., the c value is abnormally small), an alarm is issued, and an earthquake is expected to occur during the next time step. Retrospective tests show that the EAST model has better predictive power than a stationary reference model based on smoothed extrapolation of past seismicity. The official prospective test for California started on 1 July 2009 in the testing center of the Collaboratory for the Study of Earthquake Predictability (CSEP). During the first nine months, 44 M >= 4 earthquakes occurred in the testing area. For this time period, the EAST model has better predictive power than the reference model at the 1% significance level. Because the EAST model also has better predictive power than several time-varying clustering models tested in CSEP at the 1% significance level, we suggest that our successful prospective results are not due only to the space-time clustering of aftershocks.
We propose a method for converting alarm-based earthquake forecast models into rate-based ones. The differential probability gain g_alarm(ref) is the absolute value of the local slope of the Molchan trajectory that evaluates the performance of the alarm-based model with respect to the chosen reference model. We consider this differential probability gain to be constant over time; its value at each point of the testing region depends only on the alarm-function value. The rate-based model is the product of the event rate of the reference model at this point and the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows performance similar to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, to combine seismic hazard maps with short- and medium-term forecasts.
We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance, in terms of Molchan diagrams and likelihood, than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
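The combination step described above is a pointwise multiplication over the space-time grid. A minimal sketch with a hypothetical 2x2 rate grid and gain field (values are illustrative, not from the study):

```python
import numpy as np

def next_generation_forecast(starting_rate, diff_prob_gain):
    """One iteration of the combination: at each space-time cell, multiply
    the starting model's rate by the local differential probability gain of
    the input forecast relative to the starting model."""
    return np.asarray(starting_rate) * np.asarray(diff_prob_gain)

# Hypothetical grids: the gain raises rates where the input forecast adds
# information (> 1) and lowers them where it subtracts (< 1).
rates = np.array([[0.1, 0.2], [0.4, 0.3]])
gain = np.array([[2.0, 1.0], [0.5, 3.0]])
combined = next_generation_forecast(rates, gain)
```

Note how this differs from averaging: cells with a large gain can end up with rates above either parent model, which is the behavior the abstract highlights.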
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change; they arise mainly from nonunique slip inversions and unknown receiver faults, and, especially for the latter, results depend strongly on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, and deep neural network (DNN) approaches are all superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effects of the magnitude cutoff, grid-size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for ranking the stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results improve significantly for larger aftershocks and shorter time periods, without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers; these confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and also better than the distance-slip probabilistic model.
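The ROC analysis used here reduces to ranking map cells by a stress metric and accumulating true- and false-positive rates against the binary aftershock labels. A generic sketch (not the study's pipeline):

```python
import numpy as np

def roc_points(scores, labels):
    """ROC curve for a binary aftershock map: rank cells by the stress
    metric (descending) and accumulate true/false positive rates.
    Assumes both classes are present in `labels`."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / y.sum()
    fpr = np.cumsum(1 - y) / (1 - y).sum()
    return fpr, tpr

def roc_auc(fpr, tpr):
    """Trapezoidal area under the ROC curve, anchored at (0, 0)."""
    f, t = np.r_[0.0, fpr], np.r_[0.0, tpr]
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2))
```

Ranking different stress metrics then amounts to comparing their AUC values on the same aftershock labels.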
Change points in time series are perceived as isolated singularities where two regular trends of a given signal do not match. The detection of such transitions is of fundamental interest for understanding the system's internal dynamics or external forcings. In practice, observational noise makes it difficult to detect such change points. In this work we elaborate a Bayesian algorithm to estimate the location of the singularities and to quantify their credibility. We validate the performance and sensitivity of our inference method by estimating change points in synthetic data sets. As an application, we use our algorithm to analyze the annual flow volume of the Nile River at Aswan from 1871 to 1970, where we confirm a well-established significant transition point within the time series.
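The core idea, a posterior over the change-point location, can be sketched for the simplest case of a piecewise-constant mean with known noise level. This plugs in maximum-likelihood segment means rather than marginalizing them, so it is a simplification of a full Bayesian treatment, with a flat prior over the location:

```python
import numpy as np

def changepoint_posterior(x, sigma=1.0):
    """Posterior probability of a change point at each index k, assuming a
    piecewise-constant mean, Gaussian noise with known sd sigma, a flat
    prior over k, and ML plug-in values for the two segment means."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    logp = np.full(n, -np.inf)
    for k in range(2, n - 1):            # require >= 2 points per segment
        a, b = x[:k], x[k:]
        ss = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        logp[k] = -0.5 * ss / sigma ** 2
    w = np.exp(logp - logp.max())        # stabilize before normalizing
    return w / w.sum()

# Step signal: mean jumps from 0 to 3 at index 20
post = changepoint_posterior(np.r_[np.zeros(20), 3.0 * np.ones(20)])
```

The width of the resulting distribution directly quantifies the credibility of the detected transition, which is the quantity of interest in the study.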
We use a dynamic scanning electron microscope (DySEM) to analyze the movement of oscillating micromechanical structures. A dynamic secondary electron (SE) signal is recorded and correlated to the oscillatory excitation of a scanning force microscope (SFM) cantilever by means of lock-in amplifiers. We show how the relative phase of the oscillations modulates the resulting real-part and phase images of the DySEM mapping. This can be used to obtain information about the underlying oscillatory dynamics. We apply the theory to the case of a cantilever in oscillation, driven at different flexural and torsional resonance modes. This is an extension of a recent work (Schroter et al. 2012 Nanotechnology 23 435501), in which we reported a general methodology for distinguishing nonlinear features caused by the imaging process from those caused by cantilever motion.
Analytical and numerical analysis of imaging mechanism of dynamic scanning electron microscopy (2012)
The direct observation of small oscillating structures with the help of a scanning electron beam is a new approach to study the vibrational dynamics of cantilevers and microelectromechanical systems. In the scanning electron microscope, the conventional signal of secondary electrons (SE, dc part) is separated from the signal response of the SE detector, which is correlated to the respective excitation frequency for vibration by means of a lock-in amplifier. The dynamic response is separated either into images of amplitude and phase shift or into real and imaginary parts. Spatial resolution is limited to the diameter of the electron beam. The sensitivity limit to vibrational motion is estimated to be sub-nanometer for high integration times. Due to complex imaging mechanisms, a theoretical model was developed for the interpretation of the obtained measurements, relating cantilever shapes to interaction processes consisting of incident electron beam, electron-lever interaction, emitted electrons and detector response. Conclusions drawn from this new model are compared with numerical results based on the Euler-Bernoulli equation.
We use a dynamic scanning electron microscope (DySEM) to map the spatial distribution of the vibration of a cantilever beam. The DySEM measurements are based on variations of the local secondary electron signal within the imaging electron beam diameter during an oscillation period of the cantilever. For this reason, the surface of a cantilever without topography or material variation does not allow any conclusions about the spatial distribution of vibration due to a lack of dynamic contrast. In order to overcome this limitation, artificial structures were added at defined positions on the cantilever surface using focused ion beam lithography patterning. The DySEM signal of such high-contrast structures is strongly improved, hence information about the surface vibration becomes accessible. Simulations of images of the vibrating cantilever have also been performed. The results of the simulation are in good agreement with the experimental images.
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows (2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on Gaussian process regression, we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach.
Author summary: Amoeboid motion is a crawling-like cell migration that plays a key role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane. From fluorescence images, we obtain a sequence of points, representing the cell membrane, for each time step. By using regression analysis on these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question arises of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane, providing a fully automated way of extracting properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
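The smoothing step that turns discrete membrane points into a contour rests on Gaussian process regression. A generic 1-D sketch with a squared-exponential kernel (not the AmoePy implementation; all hyperparameter values are illustrative):

```python
import numpy as np

def gp_posterior_mean(t, y, t_star, ell=1.0, sf=1.0, noise=0.1):
    """Posterior mean of a zero-mean Gaussian process with a
    squared-exponential kernel, conditioned on noisy samples (t, y)."""
    def k(a, b):
        return sf ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(t, t) + noise ** 2 * np.eye(len(t))      # noisy-data covariance
    return k(np.atleast_1d(t_star), t) @ np.linalg.solve(K, y)

# Smooth recovery of a sine curve from its samples
t = np.linspace(0.0, 2 * np.pi, 30)
pred = gp_posterior_mean(t, np.sin(t), np.array([np.pi / 2]))
```

In the contour setting, the same regression is applied to each coordinate of the boundary points, with the regularization parameter controlling how strongly neighboring markers are coupled.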
We propose a global geomagnetic field model for the last 14 thousand years, based on thermoremanent records, which we call ArchKalmag14k. ArchKalmag14k is constructed by modifying recently proposed algorithms based on space-time correlations. Due to the amount of data and the complexity of the model, the full Bayesian posterior is numerically intractable. To tackle this, we sequentialize the inversion by implementing a Kalman filter with a fixed time step. Every step consists of a prediction, based on a degree-dependent temporal covariance, and a correction via Gaussian process regression. Dating errors are treated via a noisy-input formulation. Cross correlations are reintroduced by a smoothing algorithm, and model parameters are inferred from the data. Due to the specific statistical nature of the proposed algorithms, the model comes with space- and time-dependent uncertainty estimates. The new model ArchKalmag14k shows less variation in the large-scale degrees than comparable models. Local predictions represent the underlying data and agree with comparable models if the location is sampled well. Uncertainties are larger for earlier times and in regions of sparse data coverage. We also use ArchKalmag14k to analyze the appearance and evolution of the South Atlantic anomaly together with reverse flux patches at the core-mantle boundary, considering the model uncertainties. While we find good agreement with earlier models for recent times, our model suggests a different evolution of intensity minima prior to 1650 CE. In general, our results suggest that prior to 6000 BCE the data are not sufficient to support global models.
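The predict-correct cycle at the heart of such a sequential scheme can be sketched for a single scalar state (think: one Gauss coefficient). The AR(1)-type damping phi and process noise Q below stand in for the degree-dependent temporal covariance; all values are illustrative, not ArchKalmag14k's:

```python
def kalman_step(m, P, y, R, phi=0.9, Q=0.1):
    """One predict-correct cycle of a scalar Kalman filter.
    m, P: prior mean and variance of the state.
    y, R: measurement and its variance.
    phi, Q: AR(1)-type damping and process noise of the temporal prior."""
    # predict: damp the state toward zero and inflate the variance
    m_pred, P_pred = phi * m, phi ** 2 * P + Q
    # correct: standard Gaussian measurement update
    K = P_pred / (P_pred + R)
    return m_pred + K * (y - m_pred), (1.0 - K) * P_pred

m, P = 0.0, 1.0                      # diffuse initial state
m, P = kalman_step(m, P, y=1.0, R=0.25)
```

Running this forward over the record, followed by a backward smoothing pass, is the structure the abstract describes; the full model replaces the scalar state with the vector of spherical-harmonic coefficients.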
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented, which incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy input Gaussian process. All but one of the modeling hyperparameters are marginalized to reduce their influence on the outcome and to translate their variability to the posterior variance. The resulting distribution incorporates uncertainties related to the dating, measurement, and modeling processes. Results from application to archeomagnetic data show less variation in the dipole than comparable models, but are in general agreement with previous findings.
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches, we choose two different data region shapes that resemble this scenario: first, elliptical regions, and second, a region obtained from two overlapping circular spherical caps. We conduct a Monte-Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to those obtained for circular spherical caps: the maximum error at the boundary decreases towards the centre of the region. A new result is that the errors at the boundary vary with azimuth, being largest in the direction of the major axis and smallest in the direction of the minor axis. Inside the region the error decays towards a minimum at the centre at a rate similar to that in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre, with the minimum error at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
High-precision observations of the present-day geomagnetic field by ground-based observatories and satellites provide unprecedented conditions for unveiling the dynamics of the Earth’s core. Combining geomagnetic observations with dynamo simulations in a data assimilation (DA) framework allows the reconstruction of past and present states of the internal core dynamics. The essential information that couples the internal state to the observations is provided by the statistical correlations from a numerical dynamo model in the form of a model covariance matrix. Here we test a sequential DA framework, working through a succession of forecast and analysis steps, that extracts the correlations from an ensemble of dynamo models. The primary correlations couple variables of the same azimuthal wave number, reflecting the predominant axial symmetry of the magnetic field. Synthetic tests show that the scheme becomes unstable when confronted with high-precision geomagnetic observations. Our study has identified spurious secondary correlations as the origin of the problem. Keeping only the primary correlations by localizing the covariance matrix with respect to the azimuthal wave number suffices to stabilize the assimilation. While the first analysis step is fundamental in constraining the large-scale interior state, further assimilation steps refine the smaller and more dynamical scales. This refinement turns out to be critical for long-term geomagnetic predictions. Increasing the assimilation steps from one to 18 roughly doubles the prediction horizon for the dipole from about three to six centuries, and from 30 to about 60 yr for smaller observable scales. This improvement is also reflected in the predictability of surface intensity features such as the South Atlantic Anomaly. Intensity prediction errors are decreased roughly by half when assimilating long observation sequences.
The problem of estimating the maximum possible earthquake magnitude m(max) has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m(max) is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering methods have been applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4): 1649-1659, 2011), the confidence interval for m(max) is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals calculated for the Central Iran and Zagros seismotectonic zones are acceptable for meaningful levels of confidence, the results for Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m(max) from an earthquake catalog alone, for reasonable levels of confidence, is almost impossible.
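The Gutenberg-Richter ingredients behind such estimates, a b-value from the catalog and a Poisson rate used for extreme-magnitude probabilities, can be sketched in a few lines. The estimator below is the classical Aki maximum-likelihood formula, and the exceedance probability assumes an untruncated Gutenberg-Richter law, which is a deliberate simplification of the truncated models used in these studies:

```python
import numpy as np

def b_value_aki(mags, m_c, dm=0.0):
    """Maximum-likelihood b-value (Aki 1965; dm is the Utsu binning correction)."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def p_max_magnitude(lam, b, m_c, m_target, t_f):
    """P(at least one event with magnitude >= m_target within t_f years),
    assuming a Poisson process with rate lam above m_c and an
    untruncated Gutenberg-Richter law (a simplification)."""
    rate = lam * 10.0 ** (-b * (m_target - m_c))
    return 1.0 - np.exp(-rate * t_f)

# synthetic Gutenberg-Richter magnitudes with true b = 1
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=np.log10(np.e), size=50000)
b = b_value_aki(mags, m_c=4.0)
p = p_max_magnitude(lam=0.1, b=1.0, m_c=7.0, m_target=9.0, t_f=30.0)
print(round(b, 2), round(p, 3))
```

The recovered b-value is close to the true value of 1; all rate and magnitude numbers in the usage example are invented for illustration.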
We describe a new, original approach to the modelling of the Earth's magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. This method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. Sources that most significantly contribute to the field measured at the surface of the Earth are modelled. Their separation is based on strong prior information on their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classical approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller time and spatial scales. An important aspect of our method is to yield reliable error bars for all model parameters. These errors, however, are only as reliable as the descriptions of the different sources and the prior information on which they are based. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth edition of the International Geomagnetic Reference Field.
Standing stocks are typically easier to measure than process rates such as production. Hence, stocks are often used as indicators of ecosystem functions, although the latter are generally more strongly related to rates than to stocks. The regulation of stocks and rates, and thus their variability over time, may differ, as stocks constitute the net result of production and losses. Based on long-term high frequency measurements in a large, deep lake we explore the variability patterns in primary and bacterial production and relate them to those of the corresponding standing stocks, i.e. chlorophyll concentration, phytoplankton and bacterial biomass. We employ different methods (coefficient of variation, spline fitting and spectral analysis) which complement each other for assessing the variability present in the plankton data at different temporal scales. In phytoplankton, we found that the overall variability of primary production is dominated by fluctuations at low frequencies, such as the annual, whereas in stocks, and chlorophyll in particular, higher frequencies contribute substantially to the overall variance. This suggests that using standing stocks instead of rate measures leads to an under- or overestimation of food shortage for consumers during distinct periods of the year. The range of annual variation in bacterial production is 8 times greater than that of biomass, showing that the variability of bacterial activity (e.g. oxygen consumption, remineralisation) would be underestimated if biomass alone were used. The P/B ratios were variable and, although clear trends are present in both bacteria and phytoplankton, no systematic relationship between stock and rate measures was found for the two groups. Hence, standing stock and process rate measures exhibit different variability patterns, and care is needed when interpreting the mechanisms and implications of the variability encountered.
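The variance-partitioning idea, that a rate and a stock can share the same seasonal cycle yet have very different frequency content, can be illustrated with a toy periodogram. The series lengths, amplitudes, and noise level below are invented purely for illustration:

```python
import numpy as np

def annual_power_fraction(x, n_years=10):
    """Fraction of total variance located in the annual spectral bin
    (bin index n_years corresponds to 1 cycle per year for a series
    spanning n_years years)."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return spec[n_years] / spec.sum()

# toy daily series over 10 years: a production rate dominated by the
# annual cycle versus a standing stock with extra high-frequency noise
t = np.arange(3650)
rng = np.random.default_rng(4)
annual = 5.0 * np.sin(2.0 * np.pi * t / 365.0)
rate = 10.0 + annual
stock = 10.0 + annual + 3.0 * rng.standard_normal(t.size)

frac_rate = annual_power_fraction(rate)
frac_stock = annual_power_fraction(stock)
print(frac_rate > 0.9 > frac_stock)
```

Nearly all of the rate's variance sits in the annual bin, while a substantial share of the stock's variance is spread across higher frequencies, mirroring the contrast described above.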
The additional magnetic field produced by the ionospheric current system is a part of the Earth’s magnetic field. This current system is a highly variable part of a global electric circuit. The solar wind and interplanetary magnetic field (IMF) interaction with the Earth’s magnetosphere is the external driver for the global electric circuit in the ionosphere. The energy is transferred via the field-aligned currents (FACs) to the Earth’s ionosphere. The interactions between the neutral and charged particles in the ionosphere lead to the so-called thermospheric neutral wind dynamo which represents the second important driver for the global current system. Both processes are components of the magnetosphere–ionosphere–thermosphere (MIT) system, which depends on solar and geomagnetic conditions, and have significant seasonal and UT variations.
The modeling of the global dynamic Earth’s ionospheric current system is the first aim of this investigation. For our study, we use the Potsdam version of the Upper Atmosphere Model (UAM-P). The UAM is a first-principle, time-dependent, and fully self-consistent numerical global model. The model includes the thermosphere, ionosphere, plasmasphere, and inner magnetosphere, as well as the electrodynamics of the coupled MIT system, for the altitude range from 80 (60) km up to 15 Earth radii. The UAM-P differs from the UAM in its new electric field block. For this study, the low-latitude and equatorial electrodynamics of the UAM-P model were improved.
The calculation of the ionospheric current system’s contribution to the Earth’s magnetic field is the second aim of this study. We present a method that allows computing the additional magnetic field inside and outside the current layer, as generated by the spatial current density distribution, using the Biot-Savart law. Additionally, we compare the additional magnetic field calculated from 2D (equivalent currents) and 3D current distributions.
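A direct numerical evaluation of the Biot-Savart law over discretized current segments might look as follows. The segment discretization and the circular-loop validation case are illustrative choices, not the UAM-P implementation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

def biot_savart(r_obs, segments, currents):
    """Magnetic field at r_obs summed over discretized current segments.
    segments: (N, 2, 3) start/end points; currents: (N,) in ampere."""
    B = np.zeros(3)
    for (a, b), I in zip(segments, currents):
        dl = b - a                    # segment vector
        mid = 0.5 * (a + b)           # midpoint rule for the integral
        r = r_obs - mid
        B += MU0 / (4 * np.pi) * I * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# validation: the on-axis field of a circular loop has the closed form
# B_z = MU0 * I * R**2 / (2 * (R**2 + z**2)**1.5)
R, I, z, n = 1.0, 100.0, 0.5, 2000
phi = np.linspace(0, 2 * np.pi, n + 1)
pts = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n + 1)], axis=1)
segs = np.stack([pts[:-1], pts[1:]], axis=1)
B = biot_savart(np.array([0.0, 0.0, z]), segs, np.full(n, I))
B_exact = MU0 * I * R**2 / (2 * (R**2 + z**2) ** 1.5)
print(abs(B[2] - B_exact) / B_exact < 1e-3)
```

Replacing the loop segments by a 3-D grid of current-density cells gives the volume version used for space current distributions; the midpoint discretization stays the same.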
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system significantly depends on the highly variable solar wind conditions, in particular, on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). The global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input of the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows a good agreement with the satellite observations and an improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input parameter of the UAM model, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and space weather predictions.
Wavelet modelling of the gravity field by domain decomposition methods: an example over Japan
(2011)
With the advent of satellite gravity, large gravity data sets of unprecedented quality at low and medium resolution become available. For local, high resolution field modelling, they need to be combined with the surface gravity data. Such models are then used for various applications, from the study of the Earth interior to the determination of oceanic currents. Here we show how to realize such a combination in a flexible way using spherical wavelets and applying a domain decomposition approach. This iterative method, based on the Schwarz algorithms, allows us to split a large problem into smaller ones and avoids the calculation of the entire normal system, which may be huge if high resolution is sought over wide areas. A subdomain is defined as the harmonic space spanned by a subset of the wavelet family. Based on the localization properties of the wavelets in space and frequency, we define hierarchical subdomains of wavelets at different scales. On each scale, blocks of subdomains are defined by using a tailored spatial splitting of the area. The data weighting and regularization are iteratively adjusted for the subdomains, which makes it possible to handle heterogeneity in the data quality or in the gravity variations. Different levels of approximation of the subdomain normal systems are also introduced, corresponding to building local averages of the data at different resolution levels.
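The core idea of the Schwarz algorithm, repeatedly solving small overlapping subproblems instead of one large system, can be demonstrated on a minimal 1-D Poisson problem. The gravity application replaces these spatial subintervals with blocks of wavelet coefficients; everything below is a toy analogue:

```python
import numpy as np

def solve_dirichlet(f, ua, ub, h):
    """Solve -u'' = f on a uniform interior grid with Dirichlet values ua, ub."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

# alternating Schwarz on [0, 1] with overlap [0.4, 0.6]; -u'' = 1, u(0)=u(1)=0
N = 101
x = np.linspace(0, 1, N)
h = x[1] - x[0]
u = np.zeros(N)
i1, i2 = 60, 40                   # subdomain 1: x[0..60], subdomain 2: x[40..100]
for _ in range(30):               # a few Schwarz sweeps suffice in 1-D
    u[1:i1] = solve_dirichlet(np.ones(i1 - 1), 0.0, u[i1], h)
    u[i2 + 1:-1] = solve_dirichlet(np.ones(N - i2 - 2), u[i2], 0.0, h)

u_exact = 0.5 * x * (1 - x)
print(np.max(np.abs(u - u_exact)) < 1e-6)
```

Each sweep solves two small systems and exchanges boundary values across the overlap; the iterates converge geometrically to the global solution, which is the behaviour the wavelet-block version exploits.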
We first provide the theoretical background on domain decomposition methods. Then, we validate the method with synthetic data, considering two kinds of noise: white noise and coloured noise. We then apply the method to data over Japan, where we combine a satellite-based geopotential model, EIGEN-GL04S, and a local gravity model from a combination of land and marine gravity data and an altimetry-derived marine gravity model. A hybrid spherical harmonics/wavelet model of the geoid is obtained at about 15 km resolution and a corrector grid for the surface model is derived.
In this paper, we discuss the origin of superswell volcanism on the basis of the representation and analysis of recent gravity and magnetic satellite data with wavelets in spherical geometry. We computed a refined gravity field in the south central Pacific based on the GRACE satellite GGM02S global gravity field and the KMS02 altimetric grid, and a magnetic anomaly field based on CHAMP data. The magnetic anomalies are marked by the magnetic lineation of the seafloor spreading and by a strong anomaly in the Tuamotu region, which we interpret as evidence for crustal thickening. We interpret our gravity field through a continuous wavelet analysis that allows us to get a first idea of the internal density distribution. We also compute the continuous wavelet analysis of the bathymetric contribution to discriminate between deep and superficial sources. According to the gravity signature of the different chains as revealed by our analysis, various processes are at the origin of the volcanism in French Polynesia. As evidence, we show a large-scale anomaly over the Society Islands that we interpret as the gravity signature of a deeply anchored mantle plume. The gravity signature of the Cook-Austral chain indicates a complex origin which may involve deep processes. Finally, we discuss the particular location of the Marquesas chain as suggesting that the origin of the volcanism may interfere with secondary convection rolls, may be controlled by lithospheric weakness due to the regional stress field, or may be related to the presence of the nearby Tuamotu plateau.
Aftershock rates seem to follow a power-law decay, but the question of the aftershock frequency immediately after an earthquake remains open. We estimate an average aftershock decay rate within one day in southern California by stacking in time different sequences triggered by main shocks ranging in magnitude from 2.5 to 4.5. Then we estimate the time delay before the onset of the power-law aftershock decay rate. Over the last 20 years, we observe that this time delay suddenly increases after large earthquakes and slowly decreases at a constant rate during periods of low seismicity. In a band-limited power-law model, such variations can be explained by different patterns of stress distribution at different stages of the seismic cycle. We conclude that, on regional length scales, the brittle upper crust exhibits a collective behavior reflecting to some extent the proximity of a threshold of fracturing.
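A band-limited (modified Omori) decay rate with an onset delay c can be written down directly; the parameter values below are placeholders, not the southern California estimates:

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.01, p=1.1):
    """Modified Omori law: aftershock rate K / (c + t)**p, with t in days.
    The parameter c sets the time delay before the power-law decay onset."""
    return K / (c + t) ** p

# for t << c the rate saturates near K / c**p; for t >> c it decays as t**-p
early = omori_rate(1e-4) / omori_rate(0.0)
late_slope = np.log(omori_rate(10.0) / omori_rate(1.0)) / np.log(10.0)
print(round(early, 2), round(late_slope, 2))
```

The near-unity early ratio shows the flat rate before the delay c, and the late log-log slope approaches -p, the two regimes whose crossover time is the quantity tracked in the study.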
The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally allows us to incorporate prior knowledge and to quantify the uncertainty of the resulting estimates. To this end, we introduce a highly flexible, non-parametric representation of the spatially varying ETAS background intensity through a Gaussian process (GP) prior. Combined with classical triggering functions, this results in a new model formulation, namely the GP-ETAS model. We enable tractable and efficient Gibbs sampling by deriving an augmented form of the GP-ETAS inference problem. This novel sampling approach allows us to assess the posterior model variables conditioned on observed earthquake catalogues, i.e., the spatial background intensity and the parameters of the triggering function. Empirical results on two synthetic data sets indicate that GP-ETAS outperforms standard models and thus demonstrate its predictive power for observed earthquake catalogues, including uncertainty quantification for the estimated parameters. Finally, a case study for the L'Aquila region, Italy, with the devastating event of 6 April 2009, is presented.
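The conditional intensity of a spatio-temporal ETAS model, a background term plus magnitude-dependent triggering with temporal (Omori-type) and spatial power-law kernels, can be sketched as follows. The constant background mu stands in for the Gaussian-process background of GP-ETAS, and all parameter values are illustrative:

```python
import numpy as np

def etas_intensity(t, x, y, events, mu=0.05,
                   K=0.02, alpha=1.5, c=0.01, p=1.2, d=0.01, q=1.8, m0=3.0):
    """Conditional intensity of a spatio-temporal ETAS model at (t, x, y).
    events: rows (t_i, x_i, y_i, m_i); mu is a constant background
    (GP-ETAS replaces it with a Gaussian-process prior)."""
    lam = mu
    for ti, xi, yi, mi in events:
        if ti >= t:
            continue
        trig = K * np.exp(alpha * (mi - m0))                   # productivity
        omori = (p - 1) / c * (1 + (t - ti) / c) ** (-p)       # temporal pdf
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        spat = (q - 1) / (np.pi * d) * (1 + r2 / d) ** (-q)    # spatial pdf
        lam += trig * omori * spat
    return lam

events = np.array([[0.0, 0.0, 0.0, 5.0]])     # one m=5 event at the origin
near = etas_intensity(0.1, 0.0, 0.0, events)
far = etas_intensity(0.1, 2.0, 2.0, events)
print(near > far > 0)
```

Both kernels are normalized probability densities, so the productivity term alone controls the expected number of direct aftershocks; far from the parent event the intensity relaxes to the background mu.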
The motility of adherent eukaryotic cells is driven by the dynamics of the actin cytoskeleton. Despite the common force-generating actin machinery, different cell types often show diverse modes of locomotion that differ in their shape dynamics, speed, and persistence of motion. Recently, experiments in Dictyostelium discoideum have revealed that different motility modes can be induced in this model organism, depending on genetic modifications, developmental conditions, and synthetic changes of intracellular signaling. Here, we report experimental evidence that in a mutated D. discoideum cell line with increased Ras activity, switches between two distinct migratory modes, the amoeboid and fan-shaped type of locomotion, can even spontaneously occur within the same cell. We observed and characterized repeated and reversible switchings between the two modes of locomotion, suggesting that they are distinct behavioral traits that coexist within the same cell. We adapted an established phenomenological motility model that combines a reaction-diffusion system for the intracellular dynamics with a dynamic phase field to account for our experimental findings.
The satellite era brings new challenges in the development and implementation of potential field models. A major task is therefore the long-term exploitation of existing space- and ground-based gravity and magnetic data. Moreover, continuous and near real-time global monitoring of the Earth system calls for a consistent integration and assimilation of these data into complex models of the Earth’s gravity and magnetic fields, which have to accommodate the constantly increasing amount of available data. In this paper we propose how to speed up the computation of the normal equations in potential field modeling by using local multi-polar approximations of the modeling functions. The basic idea is to take advantage of the rather smooth behavior of the internal fields at satellite altitude and to replace the full available gravity or magnetic data set by a collection of local moments. We also investigate the optimal values of the free parameters of our method. Results from numerical experiments with spherical harmonic models based on both scalar gravity potential and magnetic vector data are presented and discussed. The newly developed method clearly shows that very large data sets can be used in potential field modeling in a fast and economic manner.
For the time-stationary global geomagnetic field, a new modelling concept is presented. A Bayesian non-parametric approach provides realistic location-dependent uncertainty estimates. Modelling-related variabilities are dealt with systematically by making few subjective a priori assumptions. Rather than parametrizing the model by Gauss coefficients, a functional analytic approach is applied. The geomagnetic potential is assumed to be a Gaussian process in order to describe a distribution over functions. A priori correlations are given by an explicit kernel function with a non-informative dipole contribution. A refined modelling strategy is proposed that accommodates non-linearities of archeomagnetic observables: first, a rough field estimate is obtained considering only sites that provide full field vector records; subsequently, this estimate supports the linearization that incorporates the remaining incomplete records. Results for the archeomagnetic field over the past 1000 yr are in general agreement with previous models, while improved model uncertainty estimates are provided.
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows the incorporation of prior knowledge without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge about the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading.
Amoebae explore their environment in a random way, unless external cues, such as nutrients, bias their motion. Even in the absence of cues, however, experimental cell tracks show some degree of persistence. In this paper, we analyzed individual cell tracks in the framework of a linear mixed effects model, where each track is modeled by a fractional Brownian motion, i.e., a Gaussian process exhibiting a long-term correlation structure superposed on a linear trend. The degree of persistence was quantified by the Hurst exponent of fractional Brownian motion. Our analysis of experimental cell tracks of the amoeba Dictyostelium discoideum showed a persistent movement for the majority of tracks. Employing a sliding window approach, we estimated the variations of the Hurst exponent over time, which allowed us to identify points in time where the correlation structure was distorted ("outliers"). Coarse graining of track data via down-sampling allowed us to identify the dependence of persistence on the spatial scale. While one would expect the (mode of the) Hurst exponent to be constant on different temporal scales, due to the self-similarity property of fractional Brownian motion, we observed a trend towards stronger persistence for the down-sampled cell tracks, indicating stronger persistence on larger time scales.
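The Hurst exponent of a track can be estimated from the scaling of the mean squared displacement with the time lag. The sketch below validates such an estimator on ordinary Brownian motion, where H = 1/2 is known exactly; generating true fractional Brownian motion with H != 1/2 would require a dedicated generator (e.g. the Davies-Harte method), which is omitted here:

```python
import numpy as np

def hurst_msd(track, lags):
    """Estimate the Hurst exponent from mean-squared-displacement scaling:
    E[(x(t+l) - x(t))**2] ~ l**(2H), so H is half the log-log slope."""
    msd = [np.mean((track[l:] - track[:-l]) ** 2) for l in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope / 2.0

# ordinary Brownian motion has H = 0.5 (uncorrelated increments)
rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(100000))
h_est = hurst_msd(bm, np.array([1, 2, 4, 8, 16, 32]))
print(round(h_est, 2))
```

Values of H above 0.5 indicate persistent motion, the regime reported for most of the experimental tracks; applying the estimator inside a sliding window gives the time-resolved variant described above.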
In this study we re-evaluate the estimation of the self-similarity exponent of fixational eye movements using Bayesian theory. Our analysis is based on a subsampling decomposition, which permits an analysis of the signal up to some scale factor. We demonstrate that our approach can be applied to simulated data from mathematical models of fixational eye movements to distinguish the models' properties reliably.
In this study we propose a Bayesian approach to the estimation of the Hurst exponent in terms of linear mixed models. Our method is applicable even to unevenly sampled signals and to signals with gaps. We test the method using artificial fractional Brownian motion of different lengths and compare it with the detrended fluctuation analysis technique. The estimation of the Hurst exponent of a Rosenblatt process is shown as an example of an H-self-similar process with non-Gaussian finite-dimensional distributions. Additionally, we perform an analysis of real data, the Dow Jones Industrial Average closing values, and analyze the temporal variation of its Hurst exponent.
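Detrended fluctuation analysis, the technique used for comparison, can be implemented compactly. The sketch below checks it on white noise, for which the scaling exponent is known to be 0.5; the window sizes and series length are arbitrary choices:

```python
import numpy as np

def dfa(signal, windows, order=1):
    """Detrended fluctuation analysis; returns the scaling exponent alpha.
    For fractional Gaussian noise, alpha = H; for fBm, alpha = H + 1."""
    profile = np.cumsum(signal - np.mean(signal))
    F = []
    for w in windows:
        n = len(profile) // w
        segs = profile[: n * w].reshape(n, w)
        t = np.arange(w)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)              # local trend fit
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))                    # fluctuation function
    alpha, _ = np.polyfit(np.log(windows), np.log(F), 1)
    return alpha

rng = np.random.default_rng(2)
white = rng.standard_normal(50000)       # white noise: alpha should be near 0.5
alpha = dfa(white, [16, 32, 64, 128, 256])
print(round(alpha, 2))
```

Note that DFA requires evenly sampled, gap-free input, which is precisely the restriction the linear-mixed-model approach is meant to lift.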
We consider a model based on fractional Brownian motion under the influence of noise. We implement a Bayesian approach to estimate the Hurst exponent of the model. The robustness of the method with respect to the noise intensity is tested using artificial data from fractional Brownian motion. We show that a more accurate estimation of the parameters is achieved when the noise is considered explicitly in the model. Moreover, we identify the corresponding noise-amplitude level that allows a correct estimation of the Hurst exponent in the various cases.
This book aims at understanding the diversity of planetary and lunar magnetic fields and their interaction with the solar wind. A synergistic interdisciplinary approach combines newly developed tools for data acquisition and analysis, computer simulations of planetary interiors and dynamos, models of solar wind interaction, measurement of terrestrial rocks and meteorites, and laboratory investigations. The following chapters represent a selection of some of the scientific findings derived by the 22 projects within the DFG Priority Program "Planetary Magnetism" (PlanetMag). This introductory chapter gives an overview of the individual following chapters, highlighting their role in the overall goals of the PlanetMag framework. The diversity of the different contributions reflects the wide range of magnetic phenomena in our solar system. From the program we have excluded magnetism of the Sun, which is an independent broad research discipline, but include the interaction of the solar wind with planets and moons. Within the subsequent 13 chapters of this book, the authors review the field centered on their research topic within PlanetMag. Here we briefly introduce the content of all the subsequent chapters and outline the context in which they should be seen.
Preface
(2018)
From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying a correlation-based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k(-2) slope, where k is the frequency, is an acceptable approximation for these spectra, possibly with an exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the latter occur. This is an indication that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales, i.e. less than 2 years, could still be extracted from the data to reveal an unexplored part of the core dynamics.
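The annual-difference step can be sketched directly: differencing monthly means over a 12-month lag preserves a secular trend while exactly cancelling any strictly annual periodic contamination. The toy series below is invented for illustration:

```python
import numpy as np

def secular_variation(monthly_means):
    """Annual differences of monthly means: SV(t) = B(t + 12 mo) - B(t),
    which removes any strictly 12-month periodic (external) signal."""
    return monthly_means[12:] - monthly_means[:-12]

# toy series: linear core trend (10 nT/yr) plus an annual external cycle
t = np.arange(240)                                   # 20 yr of monthly values
field = 10.0 * t / 12.0 + 5.0 * np.sin(2.0 * np.pi * t / 12.0)
sv = secular_variation(field)
print(np.allclose(sv, 10.0))
```

The recovered series is a constant 10 nT/yr: the annual sinusoid cancels term by term, which is why annual differences are a standard first step before fitting Gauss coefficient time series.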
The parameters of the nutations are now known with good accuracy, and theory accounts for most of their values. Dissipative friction at the core-mantle boundary (CMB) and at the inner core boundary is an important ingredient of the theory. Up to now, viscous coupling at a smooth interface and electromagnetic coupling have been considered. In some cases they appear hardly strong enough to account for the observations. We advocate here that the CMB has a small-scale roughness and estimate the dissipation resulting from the interaction of the fluid core motion with this topography. We conclude that it might be significant.