Document Type
- Article (75)
- Monograph/Edited Volume (10)
- Other (3)
- Postprint (3)
- Conference Proceeding (1)
Is part of the Bibliography
- yes (92)
Keywords
- Geomagnetic field (3)
- Wavelet transform (3)
- Bayesian inference (2)
- Geopotential theory (2)
- Kalman filter (2)
- Probabilistic forecasting (2)
- Satellite geodesy (2)
- geomagnetic field (2)
- geomagnetic storm (2)
- magnetosphere (2)
- multiscale analysis (2)
- spectral exponent (2)
- AFM (1)
- Assimilation (1)
- Bayesian inversion (1)
- Confidence interval (1)
- Core dynamics (1)
- Core field (1)
- Correlation based modelling (1)
- D. discoideum (1)
- Daily gravity field (1)
- Data augmentation (1)
- DySEM (1)
- Dynamo: theories and simulations (1)
- Earthquake interaction (1)
- Earthquake modeling (1)
- FIB patterning (1)
- Forecasting and prediction (1)
- Fractal (1)
- Full rank matrix filters (1)
- GRACE (1)
- Gaussian process (1)
- Geomagnetic jerks (1)
- Geomagnetic storm (1)
- Geomagnetism (1)
- Gravity anomalies and Earth structure (1)
- Hadley-Walker Circulation (1)
- Hawkes process (1)
- IGRF (1)
- ITG-Grace2010 (1)
- Interpolation (1)
- Inverse theory (1)
- Kalman smoother (1)
- Level of confidence (1)
- Lithology (1)
- Machine learning (1)
- Magnetic anomalies: modelling and interpretation (1)
- Magnetic field variations through time (1)
- Magnetosphere (1)
- Maximum magnitude of earthquake (1)
- Multichannel wavelets (1)
- Multigrid (1)
- Multiple time stepping (1)
- Multiscale analysis (1)
- ODP 659 (1)
- ODP 721/722 (1)
- ODP 967 (1)
- Pacific Ocean (1)
- Plio-Pleistocene (1)
- Quadrature mirror filters (1)
- Regularity analysis (1)
- Sampling (1)
- Satellite magnetics (1)
- Secular variation (1)
- Secular variation rate of change (1)
- Self-exciting point process (1)
- Simulation of Gaussian processes (1)
- Spatio-temporal ETAS model (1)
- Spectral exponent (1)
- Statistical seismology (1)
- Strike-slip fault model (1)
- Subdivision schemes (1)
- Vector subdivision schemes (1)
- Well log (1)
- actin dynamics (1)
- amoeboid motility (1)
- archaeomagnetism (1)
- assimilation (1)
- asteroseismology (1)
- cell migration (1)
- climate transition (1)
- core flow (1)
- geopotential theory (1)
- inverse problem (1)
- inverse theory (1)
- keratocyte-like motility (1)
- length of day (1)
- magnetic field variations through time (1)
- migration (1)
- modal analysis (1)
- modes of (1)
- palaeomagnetism (1)
- potential fields (gravity, geomagnetism) (1)
- prediction (1)
- satellite data (1)
- secular variation (1)
- size reduction (1)
- spherical harmonics (1)
- stars: early-type (1)
- stars: individual: Vega (1)
- stars: oscillations (1)
- stars: rotation (1)
- starspots (1)
- statistical methods (1)
- structured cantilever (1)
- time (1)
- time series analysis (1)
Institute
- Institut für Mathematik (41)
- Institut für Geowissenschaften (18)
- Institut für Physik und Astronomie (16)
- Institut für Biochemie und Biologie (6)
- Mathematisch-Naturwissenschaftliche Fakultät (3)
- Department Psychologie (2)
- Institut für Chemie (1)
- Institut für Informatik und Computational Science (1)
- Institut für Umweltwissenschaften und Geographie (1)
The inverse problem of determining the flow at the Earth's core-mantle boundary from an outer core magnetic field and secular variation model has been investigated through a Bayesian formalism. To circumvent the issue arising from the truncated nature of the available fields, we combined two modeling methods. In the first step, we applied a filter on the magnetic field to isolate its large scales by reducing the energy contained in its small scales; we then derived the dynamical equation, referred to as the filtered frozen flux equation, describing the spatiotemporal evolution of the filtered part of the field. In the second step, we proposed a statistical parametrization of the filtered magnetic field in order to account for both its remaining unresolved scales and its large-scale uncertainties. These two modeling techniques were then included in the Bayesian formulation of the inverse problem. To explore the complex posterior distribution of the velocity field resulting from this development, we numerically implemented an algorithm based on Markov chain Monte Carlo methods. After evaluating our approach on synthetic data and comparing it to previously introduced methods, we applied it to a magnetic field model derived from satellite data for the single epoch 2005.0. We could confirm the existence of specific features already observed in previous studies. In particular, we retrieved the planetary-scale eccentric gyre characteristic of flows evaluated under the compressible quasi-geostrophy assumption, although this hypothesis was not considered in our study. In addition, through the sampling of the velocity field posterior distribution, we could evaluate the reliability, at any spatial location and at any scale, of the flow we calculated. The flow uncertainties we determined are nevertheless conditioned by the choice of the prior constraints we applied to the velocity field.
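As a rough illustration of the sampling machinery invoked in the abstract above (not the authors' implementation), the following sketch shows a generic random-walk Metropolis sampler for an arbitrary log-posterior; the function names, step size, and toy Gaussian target are all placeholders.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis: propose a Gaussian perturbation and
    accept it with probability min(1, posterior ratio)."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# toy usage: sample a standard 2-D Gaussian "posterior"
rng = np.random.default_rng(1)
draws = metropolis(lambda v: -0.5 * v @ v, np.zeros(2), 0.5, 5000, rng)
```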
Bayesian selection of Markov models for symbol sequences: application to microsaccadic eye movements
(2012)
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
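The integrated likelihood at the heart of this kind of Markov-order selection has a closed form for a symmetric Dirichlet prior on the transition probabilities. The sketch below is a generic Dirichlet-multinomial marginal likelihood, not necessarily the paper's exact prior choice; `alpha` and the toy binary sequence are assumptions.

```python
import numpy as np
from collections import Counter
from scipy.special import gammaln

def log_integrated_likelihood(seq, order, alphabet_size, alpha=1.0):
    """Log marginal likelihood of a symbol sequence under a Markov
    model of the given order, with a symmetric Dirichlet(alpha) prior
    on each context's transition probabilities."""
    counts = Counter()
    for i in range(order, len(seq)):
        counts[(tuple(seq[i - order:i]), seq[i])] += 1
    by_context = {}
    for (ctx, _), n in counts.items():
        by_context.setdefault(ctx, []).append(n)
    K, logml = alphabet_size, 0.0
    for ns in by_context.values():
        N = sum(ns)
        logml += gammaln(K * alpha) - gammaln(K * alpha + N)
        logml += sum(gammaln(alpha + n) - gammaln(alpha) for n in ns)
    return logml

# compare orders 0..2 on a toy i.i.d. binary sequence
rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=500).tolist()
print([round(log_integrated_likelihood(seq, k, 2), 1) for k in (0, 1, 2)])
```

In a Bayesian comparison, the order with the largest integrated likelihood (weighted by its prior) is preferred.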
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this purpose, we use the original catalog, to which no declustering methods were applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4): 1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan, and Makran are not as promising. The results indicate that estimating m_max from an earthquake catalog alone for reasonable levels of confidence is almost impossible.
In this paper, we propose a method of surface-wave characterization based on the deformation of the wavelet transform of the analysed signal. An estimate of the phase velocity (and the group velocity) and the attenuation coefficient is carried out using a model-based approach to determine the propagation operator in the wavelet domain, which depends nonlinearly on a set of unknown parameters. These parameters explicitly define the phase velocity, the group velocity and the attenuation. Under the assumption that the difference between waveforms observed at a couple of stations is solely due to the dispersion characteristics and the intrinsic attenuation of the medium, we then seek to find the set of unknown parameters of this model. Finding the model parameters turns out to be an optimization problem, which is solved through the minimization of an appropriately defined cost function. We show that, unlike time-frequency methods that exploit only the square modulus of the transform, we can achieve a complete characterization of surface waves in a dispersive and attenuating medium. Using both synthetic examples and experimental data, we also show that it is in principle possible to separate different modes in both the time domain and the frequency domain.
Characterization of polarization attributes of seismic waves using continuous wavelet transforms
(2006)
Complex-trace analysis is the method of choice for analyzing polarized data. Because particle motion can be represented by instantaneous attributes that show distinct features for waves of different polarization characteristics, it can be used to separate and characterize these waves. Traditional methods of complex-trace analysis only give the instantaneous attributes as a function of time or frequency. However, for transient wave types or seismic events that overlap in time, an estimate of the polarization parameters requires analysis of the time-frequency dependence of these attributes. We propose a method to map instantaneous polarization attributes of seismic signals in the wavelet domain and explicitly relate these attributes with the wavelet-transform coefficients of the analyzed signal. We compare our method with traditional complex-trace analysis using numerical examples. An advantage of our method is the possibility of performing the complete wave-mode separation/filtering process in the wavelet domain and its ability to provide the frequency dependence of ellipticity, which contains important information on the subsurface structure. Furthermore, using 2-C synthetic and real seismic shot gathers, we show how to use the method to separate different wave types and identify zones of interfering wave modes.
We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
constraints
(2016)
Prior information in ill-posed inverse problems is of critical importance because it conditions the posterior solution and its associated variability. The problem of determining the flow evolving at the Earth's core-mantle boundary through magnetic field models derived from satellite or observatory data is no exception to the rule. This study aims to estimate what information can be extracted on the velocity field at the core-mantle boundary when the frozen flux equation is inverted under very weakly informative, but realistic, prior constraints. Instead of imposing a converging spectrum on the flow, we simply assume that its poloidal and toroidal energy spectra are characterized by power laws. The parameters of the spectra, namely their magnitudes and slopes, are unknown. The connection between the velocity field, its spectra parameters, and the magnetic field model is established through the Bayesian formulation of the problem. Working in two steps, we determined the time-averaged spectra of the flow within the 2001–2009.5 period, as well as the flow itself and its associated uncertainties in 2005.0. According to the spectra we obtained, we can conclude that the large-scale approximation of the velocity field is not an appropriate assumption within the time window we considered. For the flow itself, we show that although it is dominated by its equatorially symmetric component, it is very unlikely to be perfectly symmetric. We also demonstrate that its geostrophic state is questioned in different locations of the outer core.
For the time-stationary global geomagnetic field, a new modelling concept is presented. A Bayesian non-parametric approach provides realistic location-dependent uncertainty estimates. Modelling-related variabilities are dealt with systematically by making few subjective a priori assumptions. Rather than parametrizing the model by Gauss coefficients, a functional analytic approach is applied. The geomagnetic potential is assumed to be a Gaussian process, describing a distribution over functions. A priori correlations are given by an explicit kernel function with non-informative dipole contribution. A refined modelling strategy is proposed that accommodates non-linearities of archeomagnetic observables: first, a rough field estimate is obtained considering only sites that provide full field vector records; subsequently, this estimate supports the linearization that incorporates the remaining incomplete records. Results for the archeomagnetic field over the past 1000 yr are in general agreement with previous models, while improved model uncertainty estimates are provided.
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented, which incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy input Gaussian process. All but one of the modeling hyperparameters are marginalized to reduce their influence on the outcome and to translate their variability to the posterior variance. The resulting distribution incorporates uncertainties related to the dating, measurement and modeling process. Results from application to archeomagnetic data show less variation in the dipole than comparable models, but are in general agreement with previous findings.
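The tensor-product construction is compact enough to sketch: for data indexed by (site, time) pairs, the space-time covariance is the element-wise product of the spatial kernel matrix and a squared-exponential kernel in time. The spatial kernel below is a generic stand-in, not the paper's closed-form spherical kernel; `tau` and `ell` are assumed hyperparameters.

```python
import numpy as np

def sqexp_space(X1, X2, ell=1.0):
    """Placeholder spatial kernel (squared exponential)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def space_time_cov(X1, t1, X2, t2, tau=100.0):
    """Space-time covariance: spatial kernel times a squared
    exponential in time, evaluated for (site, time) data pairs."""
    K_t = np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / tau) ** 2)
    return sqexp_space(X1, X2) * K_t
```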
We introduce a technique for the modeling and separation of geomagnetic field components that is based on an analysis of their correlation structures alone. The inversion is based on a Bayesian formulation, which allows the computation of uncertainties. The technique allows the incorporation of complex measurement geometries like observatory data in a simple way. We show how our technique is linked to other well-known inversion techniques. A case study based on observational data is given.
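A minimal sketch of such correlation-based separation, under the assumption of zero-mean Gaussian components with known prior covariances K1 and K2 (stand-ins for, e.g., two field components' correlation structures):

```python
import numpy as np

def separate_components(y, K1, K2, noise_var):
    """Posterior means of two Gaussian components observed only as
    the sum y = f1 + f2 + noise; the split is determined entirely
    by the components' correlation structures K1 and K2."""
    C = K1 + K2 + noise_var * np.eye(len(y))
    w = np.linalg.solve(C, y)
    # posterior covariance, e.g. K1 - K1 @ solve(C, K1), gives uncertainties
    return K1 @ w, K2 @ w
```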
The Gutenberg-Richter relation for earthquake magnitudes is the most famous empirical law in seismology. It states that the frequency of earthquake magnitudes follows an exponential distribution; this has been found to be a robust feature of seismicity above the completeness magnitude, and it is independent of whether global, regional, or local seismicity is analyzed. However, the exponent b of the distribution varies significantly in space and time, which is important for process understanding and seismic hazard assessment, particularly because the Gutenberg-Richter b-value acts as a proxy for the stress state and quantifies the ratio of large to small earthquakes. In our work, we focus on the automatic detection of statistically significant temporal changes of the b-value in seismicity data. In our approach, we use Bayes factors for model selection and estimate multiple change-points of the frequency-magnitude distribution in time. The method is first applied to synthetic data, showing its capability to detect change-points as a function of the sample size and the b-value contrast. Finally, we apply this approach to examples of observational data sets for which b-value changes have previously been stated. Our analysis of foreshock and aftershock sequences related to mainshocks, as well as earthquake swarms, shows that only a portion of the b-value changes is statistically significant.
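For a single change-point, the Bayes factor used in this kind of analysis can be sketched with conjugate priors: above the completeness magnitude, Gutenberg-Richter magnitudes are exponential with rate beta = b ln 10, so a Gamma prior gives a closed-form marginal likelihood. The prior parameters and the uniform prior over the change position are assumptions of this sketch, not the paper's exact setup.

```python
import numpy as np
from scipy.special import gammaln

def log_ml_exp(x, a0=1.0, b0=1.0):
    """Log marginal likelihood of exponential data with a conjugate
    Gamma(a0, b0) prior on the rate (here: beta = b * ln 10)."""
    n, s = len(x), float(np.sum(x))
    return (a0 * np.log(b0) - gammaln(a0)
            + gammaln(a0 + n) - (a0 + n) * np.log(b0 + s))

def log_bayes_factor_change(mags, mc):
    """Bayes factor of a one-change-point model (uniform prior over
    the change position) against a constant-b model."""
    x = np.asarray(mags, float) - mc          # exponential under GR
    n = len(x)
    log_m0 = log_ml_exp(x)
    parts = [log_ml_exp(x[:k]) + log_ml_exp(x[k:]) for k in range(2, n - 1)]
    log_m1 = np.logaddexp.reduce(parts) - np.log(len(parts))
    return log_m1 - log_m0
```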
Change points in time series are perceived as isolated singularities where two regular trends of a given signal do not match. The detection of such transitions is of fundamental interest for the understanding of the system's internal dynamics or external forcings. In practice, observational noise makes it difficult to detect such change points in time series. In this work, we elaborate on a Bayesian algorithm to estimate the location of the singularities and to quantify their credibility. We validate the performance and sensitivity of our inference method by estimating change points of synthetic data sets. As an application, we use our algorithm to analyze the annual flow volume of the Nile River at Aswan from 1871 to 1970, where we confirm a well-established significant transition point within the time series.
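A compact version of such a change-point posterior, for the simplest case of a single mean shift in Gaussian noise with known level sigma (segment means marginalized under flat priors; this model choice is an assumption of the sketch):

```python
import numpy as np

def changepoint_posterior(y, sigma):
    """Normalized posterior over the location k of one mean shift;
    a narrow posterior indicates a credible change point."""
    y = np.asarray(y, float)
    ks = np.arange(2, len(y) - 1)
    logp = np.empty(ks.size)
    for j, k in enumerate(ks):
        lp = 0.0
        for seg in (y[:k], y[k:]):
            rss = np.sum((seg - seg.mean()) ** 2)
            lp += -rss / (2 * sigma**2) - 0.5 * np.log(seg.size)
        logp[j] = lp
    p = np.exp(logp - logp.max())
    return ks, p / p.sum()
```

Applied to a series with a genuine shift, such as the Nile volumes analyzed above, the posterior mass concentrates near the transition.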
We construct a family of admissible analysis reconstruction pairs of wavelet families on the sphere. The construction is an extension of the isotropic Poisson wavelets. Similar to those, the directional wavelets allow a finite expansion in terms of off-center multipoles. Unlike the isotropic case, the directional wavelets are not a tight frame. However, at small scales, they almost behave like a tight frame. We give an explicit formula for the pseudodifferential operator given by the combination analysis-synthesis with respect to these wavelets. The Euclidean limit is shown to exist and an explicit formula is given. This allows us to quantify the asymptotic angular resolution of the wavelets.
Context. The theoretically studied impact of rapid rotation on stellar evolution needs to be compared with the results of high-resolution spectroscopy-velocimetry observations. Early-type stars present a perfect laboratory for these studies. The prototype A0 star Vega has been extensively monitored in recent years in spectropolarimetry. A weak surface magnetic field was detected, implying that there might be a (still undetected) structured surface. First indications of the presence of small-amplitude stellar radial velocity variations have been reported recently, but confirmation and an in-depth study with the highly stabilized spectrograph SOPHIE/OHP were required.
Aims. The goal of this article is to present a thorough analysis of the line profile variations and associated estimators in the early-type standard star Vega (A0) in order to reveal potential activity tracers, exoplanet companions, and stellar oscillations.
Methods. Vega was monitored in quasi-continuous high-resolution echelle spectroscopy with the highly stabilized velocimeter SOPHIE/OHP. A total of 2588 high signal-to-noise spectra were obtained during 34.7 h on five nights (2 to 6 August 2012) in high-resolution mode at R = 75 000, covering the visible domain from 3895 to 6270 angstrom. For each reduced spectrum, least-squares deconvolved equivalent photospheric profiles were calculated with a T_eff = 9500 K and log g = 4.0 spectral line mask. Several methods were applied to study the dynamic behaviour of the profile variations (evolution of radial velocity, bisectors, vspan, 2D profiles, amongst others).
Results. We present the discovery of a spotted stellar surface on an A-type standard star (Vega) with very faint spot amplitudes Delta F/F_c ~ 5 x 10^-4. A rotational modulation of spectral lines with a rotation period P = 0.68 d has clearly been exhibited, unambiguously confirming the results of previous spectropolarimetric studies. Most of these brightness inhomogeneities seem to be located at lower, equatorial latitudes. Either a very thin convective layer can be responsible for magnetic field generation at small amplitudes, or a new mechanism has to be invoked to explain the existence of activity-tracing starspots. At this stage it is difficult to disentangle a rotational from a stellar pulsational origin for the existing higher-frequency periodic variations.
Conclusions. This first strong evidence that standard A-type stars can show surface structures opens a new field of research and raises the question of a potential link with the weak magnetic fields recently discovered in this category of stars.
The parameters of the nutations are now known with good accuracy, and the theory accounts for most of their values. Dissipative friction at the core-mantle boundary (CMB) and at the inner core boundary is an important ingredient of the theory. Up to now, viscous coupling at a smooth interface and electromagnetic coupling have been considered. In some cases they appear hardly strong enough to account for the observations. We advocate here that the CMB has a small-scale roughness and estimate the dissipation resulting from the interaction of the fluid core motion with this topography. We conclude that it might be significant.
We use a dynamic scanning electron microscope (DySEM) to map the spatial distribution of the vibration of a cantilever beam. The DySEM measurements are based on variations of the local secondary electron signal within the imaging electron beam diameter during an oscillation period of the cantilever. For this reason, the surface of a cantilever without topography or material variation does not allow any conclusions about the spatial distribution of vibration due to a lack of dynamic contrast. In order to overcome this limitation, artificial structures were added at defined positions on the cantilever surface using focused ion beam lithography patterning. The DySEM signal of such high-contrast structures is strongly improved, hence information about the surface vibration becomes accessible. Simulations of images of the vibrating cantilever have also been performed. The results of the simulation are in good agreement with the experimental images.
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches we choose two different data region shapes that resemble that scenario. First, we investigate the errors in elliptical regions; second, we choose a region obtained from two overlapping circular spherical caps. We conduct a Monte Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to the ones obtained for circular spherical caps: the maximum error at the boundary decreases towards the centre of the region. A new result emerges as errors at the boundary vary with azimuth, being largest in the major axis direction and minimal in the minor axis direction. Inside the region there is an error decay towards a minimum at the centre at a rate similar to the one in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre. The minimum error occurs at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
We consider a model based on the fractional Brownian motion under the influence of noise. We implement the Bayesian approach to estimate the Hurst exponent of the model. The robustness of the method to the noise intensity is tested using artificial data from fractional Brownian motion. We show that a correct estimation of the parameters is achieved when noise is considered explicitly in the model. Moreover, we identify the corresponding noise-amplitude levels that allow the correct estimation of the Hurst exponents in various cases.
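The likelihood behind such an approach is a plain Gaussian one: fBm has a known covariance, and the observational noise simply adds to its diagonal. A grid sketch of the resulting flat-prior posterior over H; all parameter values and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def fbm_cov(t, H, sigma2=1.0):
    """Covariance of fractional Brownian motion:
    sigma2/2 * (t^{2H} + s^{2H} - |t - s|^{2H})."""
    T, S = np.meshgrid(t, t, indexing="ij")
    return 0.5 * sigma2 * (T**(2 * H) + S**(2 * H) - np.abs(T - S)**(2 * H))

def log_like(y, t, H, sigma2, noise2):
    """Gaussian log likelihood of y = fBm + i.i.d. noise."""
    c, low = cho_factor(fbm_cov(t, H, sigma2) + noise2 * np.eye(t.size))
    return -0.5 * y @ cho_solve((c, low), y) - np.sum(np.log(np.diag(c)))

# synthetic noisy fBm with H = 0.7, then a grid posterior over H
rng = np.random.default_rng(2)
t = np.arange(1, 201) / 200.0
y = np.linalg.cholesky(fbm_cov(t, 0.7) + 1e-10 * np.eye(t.size)) @ rng.standard_normal(t.size)
y = y + 0.05 * rng.standard_normal(t.size)
Hs = np.linspace(0.05, 0.95, 19)
ll = [log_like(y, t, H, 1.0, 0.05**2) for H in Hs]
print("posterior mode near H =", Hs[int(np.argmax(ll))])
```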
We discuss to what extent a given earthquake catalog and the assumption of a doubly truncated Gutenberg-Richter distribution for the earthquake magnitudes allow for the calculation of confidence intervals for the maximum possible magnitude M. We show that, without further assumptions such as the existence of an upper bound of M, only very limited information may be obtained. In a frequentist formulation, for each confidence level alpha the confidence interval diverges with finite probability. In a Bayesian formulation, the posterior distribution of the upper magnitude is not normalizable. We conclude that the common approach to derive confidence intervals from the variance of a point estimator fails. Technically, this problem can be overcome by introducing an upper bound M̃ for the maximum magnitude. Then the Bayesian posterior distribution can be normalized, and its variance decreases with the number of observed events. However, because the posterior depends significantly on the choice of the unknown value of M̃, the resulting confidence intervals are essentially meaningless. The use of an informative prior distribution accounting for pre-knowledge of M is also of little use, because the prior is only modified in the case of the occurrence of an extreme event. Our results suggest that the maximum possible magnitude M should better be replaced by M_T, the maximum expected magnitude in a given time interval T, for which the calculation of exact confidence intervals becomes straightforward. From a physical point of view, numerical models of the earthquake process adjusted to specific fault regions may be a powerful alternative to overcome the shortcomings of purely statistical inference.
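The frequentist construction discussed above is easy to reproduce: the upper confidence limit is the truncation point M at which the observed sample maximum would fall in the lower alpha-tail, and the interval is unbounded whenever even M → ∞ cannot push that tail probability down to alpha. The sketch assumes a known b-value, which is itself an idealization.

```python
import numpy as np
from scipy.optimize import brentq

def upper_confidence_limit(mags, m0, b, alpha=0.05, m_big=12.0):
    """Upper limit of the (1 - alpha) confidence interval for the
    maximum possible magnitude M under a doubly truncated
    Gutenberg-Richter law; returns inf when the interval is unbounded."""
    mags = np.asarray(mags, float)
    mu, n = mags.max(), mags.size
    beta = b * np.log(10)

    def log_p_max_leq_mu(M):
        # P(sample maximum <= mu) when the truncation point is M
        F = (1 - np.exp(-beta * (mu - m0))) / (1 - np.exp(-beta * (M - m0)))
        return n * np.log(F)

    if log_p_max_leq_mu(m_big) > np.log(alpha):
        return np.inf  # the divergent case stressed in the abstract
    return brentq(lambda M: log_p_max_leq_mu(M) - np.log(alpha),
                  mu + 1e-9, m_big)
```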
This paper is concerned with localization properties of coherent states. Instead of classical uncertainty relations we consider "generalized" localization quantities. This is done by introducing measures on the reproducing kernel. In this context we may prove the existence of optimally localized states. Moreover, we provide a numerical scheme for deriving them.
We explore fluctuations of the horizontal component of the Earth's magnetic field to identify scaling behaviour of the temporal variability in geomagnetic data recorded by the Intermagnet observatories during solar cycle 23 (years 1996 to 2005). In this work, we use the remarkable ability of scaling wavelet exponents to highlight the singularities associated with discontinuities present in the magnetograms obtained at two magnetic observatories for six intense magnetic storms, including the sudden storm commencements of 14 July 2000, 29-31 October and 20-21 November 2003. In the active intervals that occurred during geomagnetic storms, we observe a rapid and unidirectional change in the spectral scaling exponent at the time of storm onset. The corresponding fractal features suggest that the dynamics of the whole time series is similar to that of a fractional Brownian motion. Our findings point to an evident, relatively sudden change, related to the emergence of persistency in the fractal power exponent fluctuations, that precedes an intense magnetic storm. These first results could be useful in the framework of extreme-event prediction studies.
We propose a conversion method from alarm-based to rate-based earthquake forecast models. A differential probability gain g_alarm^ref is the absolute value of the local slope of the Molchan trajectory that evaluates the performance of the alarm-based model with respect to the chosen reference model. We consider that this differential probability gain is constant over time. Its value at each point of the testing region depends only on the alarm function value. The rate-based model is the product of the event rate of the reference model at this point and the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows similar performance to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, seismic hazard maps with short- and medium-term forecasts.
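The conversion itself is a pointwise multiplication; a hypothetical sketch in which the differential probability gain has been tabulated per alarm-function bin (the bin edges and gains are assumed inputs, e.g. read off a Molchan trajectory):

```python
import numpy as np

def alarm_to_rate(ref_rate, alarm, bin_edges, gains):
    """Rate-based forecast from an alarm-based one: each cell's
    reference rate is scaled by the differential probability gain
    attached to its alarm-function value."""
    gains = np.asarray(gains, float)
    idx = np.clip(np.digitize(alarm, bin_edges) - 1, 0, gains.size - 1)
    return np.asarray(ref_rate, float) * gains[idx]
```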
This book aims at understanding the diversity of planetary and lunar magnetic fields and their interaction with the solar wind. A synergistic interdisciplinary approach combines newly developed tools for data acquisition and analysis, computer simulations of planetary interiors and dynamos, models of solar wind interaction, measurement of terrestrial rocks and meteorites, and laboratory investigations. The following chapters represent a selection of the scientific findings derived by the 22 projects within the DFG Priority Program "Planetary Magnetism" (PlanetMag). This introductory chapter gives an overview of the individual following chapters, highlighting their role in the overall goals of the PlanetMag framework. The diversity of the different contributions reflects the wide range of magnetic phenomena in our solar system. From the program we have excluded magnetism of the Sun, which is an independent, broad research discipline, but we include the interaction of the solar wind with planets and moons. Within the subsequent 13 chapters of this book, the authors review the field centered on their research topic within PlanetMag. Here we briefly introduce the content of all the subsequent chapters and outline the context in which they should be seen.
The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally allows us to incorporate prior knowledge and to quantify the uncertainty of the resulting estimates. Therefore, we introduce a highly flexible, non-parametric representation for the spatially varying ETAS background intensity through a Gaussian process (GP) prior. Combined with classical triggering functions, this results in a new model formulation, namely the GP-ETAS model. We enable tractable and efficient Gibbs sampling by deriving an augmented form of the GP-ETAS inference problem. This novel sampling approach allows us to assess the posterior model variables conditioned on observed earthquake catalogues, i.e., the spatial background intensity and the parameters of the triggering function. Empirical results on two synthetic data sets indicate that GP-ETAS outperforms standard models, demonstrating its predictive power for observed earthquake catalogues, including uncertainty quantification for the estimated parameters. Finally, a case study for the L'Aquila region, Italy, with the devastating event on 6 April 2009, is presented.
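For orientation, the conditional intensity that any ETAS variant evaluates (in GP-ETAS, mu is a Gaussian-process realization) has this generic form; the specific Omori-Utsu and power-law spatial kernels and all parameter names below are conventional choices, not necessarily those of the paper.

```python
import numpy as np

def etas_intensity(t, x, y, events, mu_fn, K, a, c, p, m0, d, q):
    """Conditional intensity lambda(t, x, y): background mu(x, y)
    plus normalized triggering kernels of all past events."""
    lam = mu_fn(x, y)
    for ti, xi, yi, mi in events:
        if ti >= t:
            continue
        productivity = K * np.exp(a * (mi - m0))
        time_decay = (p - 1) / c * (1 + (t - ti) / c) ** (-p)   # Omori-Utsu
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        spatial = (q - 1) / (np.pi * d) * (1 + r2 / d) ** (-q)  # power law
        lam += productivity * time_decay * spatial
    return lam
```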
Different GRACE data analysis centers provide temporal variations of the Earth's gravity field as monthly, 10-daily or weekly solutions. These temporal mean fields cannot model the variations occurring during the respective time span. The aim of our approach is to extract as much temporal information as possible out of the given GRACE data. Therefore the temporal resolution shall be increased with the goal to derive daily snapshots. Yet, such an increase in temporal resolution is accompanied by a loss of redundancy and therefore by a reduced accuracy if the daily solutions are calculated individually. The approach presented here therefore introduces spatial and temporal correlations of the expected gravity field signal derived from geophysical models in addition to the daily observations, thus effectively constraining the spatial and temporal evolution of the GRACE solution. The GRACE data processing is then performed within the framework of a Kalman filter and smoother estimation procedure.
The approach is first investigated in a closed-loop simulation scenario and then applied to the original GRACE observations (level-1B data) to calculate daily solutions as part of the gravity field model ITG-Grace2010. Finally, the daily models are compared to vertical GPS station displacements and ocean bottom pressure observations.
From these comparisons it can be concluded that, particularly at higher latitudes, the daily solutions contain high-frequency temporal gravity field information and represent an improvement over existing geophysical models.
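The estimation backbone described above reduces, per day, to one predict/update cycle of a linear Kalman filter (a smoother pass then propagates information backwards in time). A generic sketch; the state transition A and the covariances Q and R standing in for the model-derived correlations are assumptions.

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, y):
    """One predict/update cycle: A and Q encode the (model-derived)
    temporal evolution of the gravity state, H and R map the state
    to one day's observations."""
    x_pred = A @ x                          # predict
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = np.linalg.solve(S, H @ P_pred).T    # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)   # update with today's data
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```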
The injection of fluids is a well-known origin for the triggering of earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake on 27 May 2000 lies very well within the expectation from past seismicity without adjusting any parameters. Vice versa, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
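The core of such an estimate can be sketched in one line of Gutenberg-Richter arithmetic: the maximum expected magnitude in a window is roughly the magnitude at which the expected number of exceedances drops to one. This is a simplification of the statistical treatment above; the rate, b-value, and window length are assumed inputs.

```python
import numpy as np

def expected_max_magnitude(rate_above_mc, mc, b, T):
    """Magnitude with one expected exceedance in a window of length T,
    given the rate of events above the completeness level mc and the
    Gutenberg-Richter b-value."""
    n_T = rate_above_mc * T          # expected number of events above mc
    return mc + np.log10(n_T) / b

# e.g. 50 events/yr above m = 1 with b = 1 gives about m = 2.7 in one year
print(expected_max_magnitude(50, 1.0, 1.0, 1.0))
```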
We introduce a method for computing instantaneous-polarization attributes from multicomponent signals. This is an improvement on the standard covariance method (SCM) because it does not depend on the window size used to compute the standard covariance matrix. We overcome the window-size problem by deriving an approximate analytical formula for the cross-energy matrix in which we automatically and adaptively determine the time window. The proposed method applies polarization analysis to multicomponent seismic data for wave-mode separation and filtering.
We introduce a method of wavefield separation from multicomponent data sets based on the use of the continuous wavelet transform. Our method is a further generalization of the approach proposed by Morozov and Smithson, in that by using the continuous wavelet transform, we can achieve a better separation of wave types by designing the filter in the time-frequency domain. Furthermore, using the instantaneous polarization attributes defined in the wavelet domain, we show how to construct filters tailored to separate different wave types (elliptically or linearly polarized), followed by an inverse wavelet transform to obtain the desired wave type in the time domain. Using synthetic and experimental data, we show how the present method can be used for wavefield separation.
In this paper we present a Bayesian framework for interpolating data in a reproducing kernel Hilbert space associated with a random subdivision scheme, where not only approximations of the values of a function at some missing points can be obtained, but also uncertainty estimates for such predicted values. This random scheme generalizes the usual subdivision by taking into account, at each level, some uncertainty given in terms of suitably scaled noise sequences of i.i.d. Gaussian random variables with zero mean and given variance, and generating, in the limit, a Gaussian process whose correlation structure is characterized and used for computing realizations of the conditional posterior distribution. The hierarchical nature of the procedure may be exploited to reduce the computational cost compared to standard techniques in the case where many prediction points need to be considered.
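One level of such a random scheme is easy to write down. The sketch below uses midpoint (linear) subdivision with level-wise scaled Gaussian perturbations; the specific rule and the decay factor are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def random_subdivision(values, levels, noise_scale, decay, rng):
    """Refine a sequence by inserting noisy midpoints; the noise
    variance shrinks by `decay` at each level, so the limit is a
    Gaussian process whose correlation structure reflects the rule."""
    v = np.asarray(values, float)
    for j in range(levels):
        mid = 0.5 * (v[:-1] + v[1:])
        mid += noise_scale * decay**j * rng.standard_normal(mid.size)
        out = np.empty(v.size + mid.size)
        out[0::2], out[1::2] = v, mid    # interleave old points and midpoints
        v = out
    return v
```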
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise due to nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, as well as deep neural network (DNN) approaches are all superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods, but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and also better than the distance-slip probabilistic model.
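The binary ROC test used for the ranking is straightforward to reproduce: sweep a threshold over the stress metric and trace hit rate against false-alarm rate over the spatial cells. A generic sketch, with the cell discretization and binary labels assumed given:

```python
import numpy as np

def roc_curve(metric, has_aftershock):
    """ROC of a per-cell stress metric against binary aftershock
    occurrence: fraction of aftershock cells above each threshold
    (hit rate) vs. fraction of quiet cells above it (false alarms)."""
    s = np.asarray(metric, float)
    a = np.asarray(has_aftershock, bool)
    thresholds = np.unique(s)[::-1]
    hits = np.array([(s[a] >= th).mean() for th in thresholds])
    false_alarms = np.array([(s[~a] >= th).mean() for th in thresholds])
    return false_alarms, hits   # the area under this curve ranks the metrics
```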
The satellite era brings new challenges in the development and the implementation of potential field models. Major aspects are, therefore, the exploitation of existing space- and ground-based gravity and magnetic data over the long term. Moreover, continuous and near real-time global monitoring of the Earth system allows for a consistent integration and assimilation of these data into complex models of the Earth's gravity and magnetic fields, which have to consider the constantly increasing amount of available data. In this paper we propose how to speed up the computation of the normal equations in potential field modeling by using local multi-polar approximations of the modeling functions. The basic idea is to take advantage of the rather smooth behavior of the internal fields at satellite altitude and to replace the full available gravity or magnetic data by a collection of local moments. We also investigate the optimal values of the free parameters of our method. Results from numerical experiments with spherical harmonic models based on both scalar gravity potential and magnetic vector data are presented and discussed. The newly developed method clearly shows that very large datasets can be used in potential field modeling in a fast and more economic manner.
Borehole logs provide geological information about the rocks crossed by the wells. Several properties of rocks can be interpreted in terms of lithology, type and quantity of the fluid filling the pores and fractures. Here, the logs are assumed to be nonhomogeneous Brownian motions (nhBms), which are generalized fractional Brownian motions (fBms) indexed by depth-dependent Hurst parameters H(z). Three techniques, the local wavelet approach (LWA), the average-local wavelet approach (ALWA), and the Peltier algorithm (PA), are suggested to estimate the Hurst functions (or the regularity profiles) from the logs. First, two synthetic sonic logs with different parameters, shaped by the successive random additions (SRA) algorithm, are used to demonstrate the potential of the proposed methods. The obtained Hurst functions are close to the theoretical Hurst functions. Besides, the transitions between the modeled layers are marked by discontinuities in the Hurst values. It is also shown that PA leads to the best Hurst value estimates. Second, we investigate the multifractional property of sonic log data recorded at two scientific deep boreholes: the pilot hole VB and the ultra-deep main hole HB, drilled for the German Continental Deep Drilling Program (KTB). All the regularity profiles independently obtained for the logs provide a clear correlation with lithology, and from each regularity profile we derive a similar segmentation in terms of lithological units. The lithological discontinuities (strata bounds and fault contacts) are located at the local extrema of the Hurst functions. Moreover, the regularity profiles are compared with the KTB estimated porosity logs, showing a significant relation between the local extrema of the Hurst functions and the fluid-filled fractures. The Hurst function may then constitute a tool to characterize underground heterogeneities.
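A toy version of such a depth-dependent regularity estimate, using increment variances in a sliding window rather than the wavelet machinery of the paper (for fBm, the variance of lag-k increments scales as k^{2H}, so two lags suffice for a local estimate):

```python
import numpy as np

def local_hurst(x, window=128):
    """Sliding-window estimate of a depth-dependent Hurst function
    H(z) from the ratio of lag-2 to lag-1 increment variances."""
    x = np.asarray(x, float)
    half = window // 2
    H = np.full(x.size, np.nan)
    for i in range(half, x.size - half):
        seg = x[i - half:i + half]
        v1 = np.var(np.diff(seg))            # lag-1 increments
        v2 = np.var(seg[2:] - seg[:-2])      # lag-2 increments
        H[i] = 0.5 * np.log2(v2 / v1)        # Var ~ k^{2H}
    return H
```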
We propose a reduced dynamical system describing the coupled evolution of fluid flow and magnetic field at the top of the Earth's core between the years 1900 and 2014. The flow evolution is modeled with a first-order autoregressive process, while the magnetic field obeys the classical frozen flux equation. An ensemble Kalman filter algorithm serves to constrain the dynamics with the geomagnetic field and its secular variation given by the COV-OBS.x1 model. Using a large ensemble with 40,000 members provides meaningful statistics, including reliable error estimates. The model highlights two distinct flow scales. Slowly varying large-scale elements include the already documented eccentric gyre. Localized short-lived structures include distinctly ageostrophic features like the high-latitude polar jet in the Northern Hemisphere. Comparisons with independent observations of the length-of-day variations not only validate the flow estimates but also suggest an acceleration of the geostrophic flows over the last century. Hindcasting tests show that our model outperforms simpler predictions (linear extrapolation and stationary flow). The predictability limit, of about 2,000 years for the magnetic dipole component, is mostly determined by the random fast-varying dynamics of the flow and much less by the geomagnetic data quality or lack of small-scale information.
The additional magnetic field produced by the ionospheric current system is a part of the Earth's magnetic field. This current system is a highly variable part of a global electric circuit. The solar wind and interplanetary magnetic field (IMF) interaction with the Earth's magnetosphere is the external driver for the global electric circuit in the ionosphere. The energy is transferred via the field-aligned currents (FACs) to the Earth's ionosphere. The interactions between the neutral and charged particles in the ionosphere lead to the so-called thermospheric neutral wind dynamo, which represents the second important driver for the global current system. Both processes are components of the magnetosphere-ionosphere-thermosphere (MIT) system, which depends on solar and geomagnetic conditions and has significant seasonal and UT variations.
The modeling of the global, dynamic ionospheric current system of the Earth is the first aim of this investigation. For our study, we use the Potsdam version of the Upper Atmosphere Model (UAM-P). The UAM is a first-principles, time-dependent, and fully self-consistent numerical global model. The model includes the thermosphere, ionosphere, plasmasphere, and inner magnetosphere as well as the electrodynamics of the coupled MIT system for the altitudinal range from 80 (60) km up to 15 Earth radii. The UAM-P differs from the UAM by a new electric field block. For this study, the low-latitude and equatorial electrodynamics of the UAM-P model was improved.
The calculation of the ionospheric current system's contribution to the Earth's magnetic field is the second aim of this study. We present a method that allows computing the additional magnetic field inside and outside the current layer, as generated by the space current density distribution, using the Biot-Savart law. Additionally, we perform a comparison of the additional magnetic field calculation using 2D (equivalent currents) and 3D current distributions.
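The field computation itself is a direct numerical Biot-Savart sum over the discretized current density; a minimal sketch (SI units; arrays of source positions, current densities, and cell volumes assumed given):

```python
import numpy as np

def biot_savart(r_obs, r_src, j_src, dV):
    """B(r) = mu0/(4 pi) * sum_k J_k x (r - r_k) / |r - r_k|^3 * dV_k
    for observation points r_obs (n_obs, 3), source positions r_src
    (n_src, 3), current densities j_src (n_src, 3), volumes dV (n_src,)."""
    mu0 = 4e-7 * np.pi
    B = np.zeros_like(np.asarray(r_obs, float))
    for rk, jk, dv in zip(r_src, j_src, dV):
        d = r_obs - rk                                   # (n_obs, 3)
        r3 = np.linalg.norm(d, axis=1, keepdims=True) ** 3
        B += np.cross(jk, d) / r3 * dv
    return mu0 / (4 * np.pi) * B
```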
In the estimation of dispersion with the help of wavelet analysis, considerable emphasis has been put on the extraction of the group velocity using the modulus of the wavelet transform. In this paper we give an asymptotic expression of the full propagator in wavelet space that comprises the phase velocity as well. This operator establishes a relationship between the observed signals at two different stations during wave propagation in a dispersive and attenuating medium. Numerical and experimental examples are presented to show that the method accurately models seismic wave dispersion and attenuation.
Complex systems range from "hard", physical ones, such as climate physics or turbulence in fluids and plasmas, to so-called "soft" ones, as found in biology, soft matter physics, sociology, or economics. Developing an understanding of such a system involves a description in terms of statistics and, ultimately, mathematical equations. Modern data analysis provides a large set of tools for the analysis of complexity at different levels of description. In this course, statistical methods with an emphasis on dynamical systems are discussed and practiced. On the methodological side, linear and nonlinear approaches are covered, including the standard tools of descriptive and inferential statistics, wavelet analysis, nonparametric regression, and the estimation of nonlinear measures such as fractal dimensions, entropies, and complexity measures. On the modelling side, deterministic and stochastic systems, chaos, scaling, and the emergence of complexity through interaction are discussed, for discrete as well as spatially extended systems. The two approaches are brought together through the system analysis of suitable examples.
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes of the earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model and estimate possible change-points, and we additionally use a likelihood ratio test to calculate the significance of the change of the intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, showing its capability to detect real change-points. Finally, we apply this approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
Multivariate analyses of fixation durations in reading with linear mixed and additive mixed models
(2012)
In this paper, we discuss the origin of superswell volcanism on the basis of representation and analysis of recent gravity and magnetic satellite data with wavelets in spherical geometry. We computed a refined gravity field in the south central Pacific based on the GRACE satellite GGM02S global gravity field and the KMS02 altimetric grid, and a magnetic anomaly field based on CHAMP data. The magnetic anomalies are marked by the magnetic lineation of the seafloor spreading and by a strong anomaly in the Tuamotu region, which we interpret as evidence for crustal thickening. We interpret our gravity field through a continuous wavelet analysis that allows us to get a first idea of the internal density distribution. We also compute the continuous wavelet analysis of the bathymetric contribution to discriminate between deep and superficial sources. According to the gravity signature of the different chains as revealed by our analysis, various processes are at the origin of the volcanism in French Polynesia. As evidence, we show a large-scale anomaly over the Society Islands that we interpret as the gravity signature of a deeply anchored mantle plume. The gravity signature of the Cook-Austral chain indicates a complex origin which may involve deep processes. Finally, we discuss the particular location of the Marquesas chain as suggesting that the origin of the volcanism may interfere with secondary convection rolls, may be controlled by lithospheric weakness due to the regional stress field, or else may be related to the presence of the nearby Tuamotu plateau.
From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying correlation-based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k^-2 slope, where k is the frequency, is an acceptable approximation for these spectra, with a possible exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the latter happen. This is an indication that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales, i.e. less than 2 years, could still be extracted from data to reveal an unexplored part of the core dynamics.
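The processing chain (annual differences of monthly means, then a Fourier transform) is short enough to sketch; checking the k^-2 background then amounts to fitting a slope of about -2 in log-log coordinates. The sampling conventions below are assumptions.

```python
import numpy as np

def sv_spectrum(monthly_values):
    """Power spectrum of a secular-variation series built from annual
    differences of a monthly mean series (one Gauss coefficient or
    one observatory component)."""
    sv = monthly_values[12:] - monthly_values[:-12]   # annual differences
    sv = sv - sv.mean()
    power = np.abs(np.fft.rfft(sv)) ** 2
    freq = np.fft.rfftfreq(sv.size, d=1 / 12)         # cycles per year
    return freq[1:], power[1:]

# a k^-2 background appears as slope ~ -2 of log(power) vs log(freq)
```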