The inverse problem of determining the flow at the Earth's core-mantle boundary from an outer core magnetic field and secular variation model has been investigated through a Bayesian formalism. To circumvent the issues arising from the truncated nature of the available fields, we combined two modeling methods. In the first step, we applied a filter to the magnetic field to isolate its large scales by reducing the energy contained in its small scales; we then derived the dynamical equation, referred to as the filtered frozen flux equation, describing the spatiotemporal evolution of the filtered part of the field. In the second step, we proposed a statistical parametrization of the filtered magnetic field to account for both its remaining unresolved scales and its large-scale uncertainties. These two modeling techniques were then included in the Bayesian formulation of the inverse problem. To explore the complex posterior distribution of the velocity field resulting from this development, we implemented an algorithm based on Markov chain Monte Carlo methods. After evaluating our approach on synthetic data and comparing it to previously introduced methods, we applied it to a magnetic field model derived from satellite data for the single epoch 2005.0. We could confirm the existence of specific features already observed in previous studies. In particular, we retrieved the planetary-scale eccentric gyre characteristic of flows evaluated under the compressible quasi-geostrophy assumption, although this hypothesis was not imposed in our study. In addition, through the sampling of the posterior distribution of the velocity field, we could evaluate the reliability, at any spatial location and at any scale, of the flow we calculated. The flow uncertainties we determined are nevertheless conditioned by the prior constraints we applied to the velocity field.
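The posterior sampling step can be illustrated with a minimal Metropolis-Hastings sketch on a toy linear inverse problem. The forward matrix `G`, the noise level, and the Gaussian prior below are stand-ins for illustration only, not the paper's actual forward operator or prior constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: observations y = G m + noise, Gaussian prior on m
G = rng.normal(size=(5, 3))
m_true = np.array([1.0, -0.5, 2.0])
y = G @ m_true + 0.1 * rng.normal(size=5)

def log_post(m, sigma=0.1, tau=5.0):
    # Gaussian likelihood plus Gaussian prior (illustrative prior constraint)
    return (-0.5 * np.sum((y - G @ m) ** 2) / sigma**2
            - 0.5 * np.sum(m**2) / tau**2)

def metropolis(n_samples, step=0.1):
    """Random-walk Metropolis sampler over the model parameters."""
    m = np.zeros(3)
    lp = log_post(m)
    samples = []
    for _ in range(n_samples):
        prop = m + step * rng.normal(size=3)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            m, lp = prop, lp_prop
        samples.append(m.copy())
    return np.array(samples)

samples = metropolis(5000)
# Pointwise mean and spread of the sampled posterior quantify reliability
post_mean = samples[1000:].mean(axis=0)
post_std = samples[1000:].std(axis=0)
```

The same idea scales to a discretized flow field: the chain's pointwise spread gives the location-by-location reliability the abstract describes.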
Bayesian selection of Markov models for symbol sequences: application to microsaccadic eye movements
(2012)
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
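The integrated-likelihood comparison described above has a closed form when each transition row gets a symmetric Dirichlet prior. The sketch below applies it to a synthetic binary sequence; the transition probabilities (0.8, 0.3) are invented for illustration, not taken from the eye-movement data.

```python
import numpy as np
from math import lgamma
from collections import defaultdict

def log_evidence(seq, order, n_symbols, alpha=1.0):
    """Integrated (marginal) likelihood of a symbol sequence under a Markov
    model of the given order, with Dirichlet(alpha) priors on transition rows."""
    counts = defaultdict(lambda: np.zeros(n_symbols))
    for i in range(order, len(seq)):
        context = tuple(seq[i - order:i])
        counts[context][seq[i]] += 1
    log_ev = 0.0
    for c in counts.values():
        # log Dirichlet-multinomial evidence for one transition row
        log_ev += lgamma(n_symbols * alpha) - lgamma(n_symbols * alpha + c.sum())
        log_ev += sum(lgamma(alpha + x) - lgamma(alpha) for x in c)
    return log_ev

rng = np.random.default_rng(1)
# Hypothetical first-order binary sequence (e.g. left/right saccade labels)
seq = [0]
for _ in range(500):
    p = 0.8 if seq[-1] == 0 else 0.3  # next symbol depends on the previous one
    seq.append(int(rng.uniform() < p))

scores = {k: log_evidence(seq, k, 2) for k in (0, 1, 2)}
best_order = max(scores, key=scores.get)
```

Because the sequence is generated with genuine first-order structure, the evidence favors order 1: the order-0 model misfits badly, while order 2 pays an Occam penalty for its unneeded extra contexts.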
The problem of estimating the maximum possible earthquake magnitude m(max) has attracted growing attention in recent years. Because the data are sparse, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m(max) is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the largest observed event, the upper limit is calculated from the catalog and the statistical model. To this end, we use both the original catalog, to which no declustering method has been applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4): 1649-1659, 2011), the confidence interval for m(max) is frequently unbounded, especially if high levels of confidence are required; in this case, no information is gained from the data. We therefore elaborate for which settings finite confidence intervals are obtained. Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals calculated for the Central Iran and Zagros seismotectonic zones are acceptable for meaningful levels of confidence, the results for Kopet Dagh, Alborz, Azerbaijan, and Makran are much less promising. The results indicate that estimating m(max) from an earthquake catalog alone for reasonable levels of confidence is almost impossible.
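The bounded/unbounded behavior of the upper confidence limit can be sketched directly from the doubly truncated Gutenberg-Richter model. All numbers below (m0 = 4.5, beta = 2.3, the catalog sizes) are illustrative assumptions, not values from the Iranian catalog.

```python
import numpy as np

def p_max_below(mu, m_max, m0, beta, n):
    """P(observed maximum <= mu | m_max) for n events drawn from a
    Gutenberg-Richter law truncated to [m0, m_max]."""
    F = (1 - np.exp(-beta * (mu - m0))) / (1 - np.exp(-beta * (m_max - m0)))
    return F**n

def upper_limit(mu, m0, beta, n, alpha=0.05, m_cap=12.0):
    """Upper confidence limit for m_max at level 1 - alpha.
    Returns None when the interval is unbounded, i.e. even m_max -> infinity
    cannot be rejected (the no-information case discussed in the abstract)."""
    p_inf = (1 - np.exp(-beta * (mu - m0))) ** n
    if p_inf >= alpha:
        return None  # no finite upper limit exists
    lo, hi = mu, m_cap  # assumes the crossing lies below m_cap
    for _ in range(100):  # bisection: p_max_below is decreasing in m_max
        mid = 0.5 * (lo + hi)
        if p_max_below(mu, mid, m0, beta, n) >= alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Largest observed magnitude 7.0 above completeness m0 = 4.5, b-value ~ 1
u_unbounded = upper_limit(7.0, 4.5, 2.3, n=100)   # sparse catalog: unbounded
u_bounded = upper_limit(7.0, 4.5, 2.3, n=2000)    # rich catalog: finite limit
```

With only 100 events the interval is unbounded even at the 95% level, illustrating why sparse zones yield no finite m(max) estimate.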
In this paper, we propose a method for characterizing surface waves based on the deformation of the wavelet transform of the analysed signal. The phase velocity, the group velocity, and the attenuation coefficient are estimated using a model-based approach to determine the propagation operator in the wavelet domain, which depends nonlinearly on a set of unknown parameters. These parameters explicitly define the phase velocity, the group velocity, and the attenuation. Under the assumption that the difference between waveforms observed at a pair of stations is solely due to the dispersion characteristics and the intrinsic attenuation of the medium, we then seek the set of unknown parameters of this model. Finding the model parameters thus becomes an optimization problem, which is solved through the minimization of an appropriately defined cost function. We show that, unlike time-frequency methods that exploit only the square modulus of the transform, we can achieve a complete characterization of surface waves in a dispersive and attenuating medium. Using both synthetic examples and experimental data, we also show that it is in principle possible to separate different modes in both the time domain and the frequency domain.
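The model-based fitting idea can be sketched in a simplified form: a parametric propagation operator maps the record at one station to the next, and the parameters are found by minimizing a misfit cost. For brevity the sketch uses a plain Fourier-domain operator with a frequency-independent velocity and attenuation (the paper works in the wavelet domain with dispersive parameters), and all values are synthetic.

```python
import numpy as np

fs = 500.0
t = np.arange(0, 2, 1 / fs)

# Hypothetical source wavelet recorded at station 1
s1 = np.exp(-200 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 25 * (t - 0.3))

def propagate(sig, dx, c, alpha):
    """Frequency-domain propagation operator: phase delay dx/c plus
    attenuation exp(-alpha * dx)."""
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    op = np.exp(-alpha * dx) * np.exp(-2j * np.pi * f * dx / c)
    return np.fft.irfft(np.fft.rfft(sig) * op, n=len(sig))

# Synthetic record at station 2: dx = 100 m, c = 400 m/s, alpha = 0.004 1/m
s2 = propagate(s1, 100.0, 400.0, 0.004)

# Grid search minimizing the least-squares cost function over (c, alpha)
cs = np.linspace(300, 500, 41)
alphas = np.linspace(0.0, 0.01, 41)
cost = [[np.sum((propagate(s1, 100.0, c, a) - s2) ** 2) for a in alphas]
        for c in cs]
i, j = np.unravel_index(np.argmin(cost), (41, 41))
c_hat, a_hat = cs[i], alphas[j]
```

The grid search stands in for the gradient-based optimization one would use in practice; the cost vanishes exactly at the true parameter pair.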
Characterization of polarization attributes of seismic waves using continuous wavelet transforms
(2006)
Complex-trace analysis is the method of choice for analyzing polarized data. Because particle motion can be represented by instantaneous attributes that show distinct features for waves of different polarization characteristics, these attributes can be used to separate and characterize such waves. Traditional methods of complex-trace analysis only give the instantaneous attributes as a function of time or frequency. However, for transient wave types or seismic events that overlap in time, an estimate of the polarization parameters requires analysis of the time-frequency dependence of these attributes. We propose a method to map instantaneous polarization attributes of seismic signals in the wavelet domain and explicitly relate these attributes to the wavelet-transform coefficients of the analyzed signal. We compare our method with traditional complex-trace analysis using numerical examples. Advantages of our method are the possibility of performing the complete wave-mode separation/filtering process in the wavelet domain and the ability to provide the frequency dependence of ellipticity, which contains important information on the subsurface structure. Furthermore, using 2-C synthetic and real seismic shot gathers, we show how to use the method to separate different wave types and identify zones of interfering wave modes.
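The instantaneous attributes in question can be illustrated in the time domain with a 2-C analytic-signal sketch (the paper computes the analogous quantities from wavelet-transform coefficients). The elliptically polarized test signal below is synthetic, with a true ellipticity of 0.4.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 1, 1 / fs)

# Hypothetical 2-C record: an elliptical particle motion at 10 Hz
x = 1.0 * np.cos(2 * np.pi * 10 * t)   # e.g. radial component
y = 0.4 * np.sin(2 * np.pi * 10 * t)   # e.g. vertical component

def analytic(sig):
    """Analytic signal via the FFT (Hilbert transform), no SciPy needed."""
    n = len(sig)
    S = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2   # double positive frequencies
    h[n // 2] = 1     # Nyquist bin (n is even here)
    return np.fft.ifft(S * h)

zx, zy = analytic(x), analytic(y)
# Instantaneous ellipse axes from the rotating-phasor decomposition
r_plus = np.abs(zx + 1j * zy)
r_minus = np.abs(zx - 1j * zy)
semi_major = (r_plus + r_minus) / 2
semi_minor = np.abs(r_plus - r_minus) / 2
ellipticity = semi_minor / semi_major
```

For this stationary test signal the recovered ellipticity is constant at 0.4; for transient, overlapping events the same attributes would be computed per wavelet coefficient, giving their time-frequency dependence.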
We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance, in terms of Molchan diagrams and likelihood, than both the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
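One combination iteration reduces to a cell-wise multiplication. In the sketch below the gain is crudely proxied by a capped rate ratio; in the published method the differential probability gain is estimated from past performance (e.g. Molchan-diagram statistics), and the grids here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical gridded rates (expected events per cell)
starting = rng.uniform(0.01, 1.0, size=(10, 10))  # starting forecast (e.g. EEPAS)
input_fc = rng.uniform(0.01, 1.0, size=(10, 10))  # input forecast (e.g. EAST)

# Local differential probability gain of the input relative to the start.
# Illustrative proxy: a capped rate ratio, NOT the paper's estimator.
gain = np.clip(input_fc / starting, 0.5, 2.0)

# Next-generation forecast: starting rate times the local gain
combined = starting * gain

# Renormalize so the total expected number of events is preserved
combined *= starting.sum() / combined.sum()
```

Iterating with further input forecasts repeats the same multiply-and-renormalize step, so each generation keeps the rate-based form required for likelihood testing.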
constraints
(2016)
Prior information in ill-posed inverse problems is of critical importance because it conditions the posterior solution and its associated variability. The problem of determining the flow evolving at the Earth's core-mantle boundary from magnetic field models derived from satellite or observatory data is no exception to the rule. This study aims to estimate what information can be extracted on the velocity field at the core-mantle boundary when the frozen flux equation is inverted under very weakly informative, but realistic, prior constraints. Instead of imposing a converging spectrum on the flow, we simply assume that its poloidal and toroidal energy spectra are characterized by power laws. The parameters of the spectra, namely their magnitudes and slopes, are unknown. The connection between the velocity field, its spectral parameters, and the magnetic field model is established through the Bayesian formulation of the problem. Working in two steps, we determined the time-averaged spectra of the flow within the 2001–2009.5 period, as well as the flow itself and its associated uncertainties in 2005.0. According to the spectra we obtained, we conclude that the large-scale approximation of the velocity field is not an appropriate assumption within the time window we considered. For the flow itself, we show that although it is dominated by its equatorially symmetric component, it is very unlikely to be perfectly symmetric. We also demonstrate that its geostrophic state is called into question at different locations of the outer core.
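A power-law spectral prior of this kind can be sketched by drawing spherical-harmonic coefficients whose expected energy per degree follows a power law. The magnitude, slope, and truncation degree below are arbitrary illustration values, and the flat split of degree energy over orders is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_coeffs(magnitude, slope, l_max):
    """Draw spherical-harmonic coefficients whose expected energy per degree
    follows the power law E(l) = magnitude * l**(-slope)."""
    coeffs = {}
    for l in range(1, l_max + 1):
        n_m = 2 * l + 1                        # orders m = -l .. l
        var = magnitude * l ** (-slope) / n_m  # split degree energy over orders
        coeffs[l] = rng.normal(0.0, np.sqrt(var), size=n_m)
    return coeffs

# One prior draw for, say, the toroidal flow spectrum
c = sample_coeffs(magnitude=10.0, slope=2.0, l_max=20)
spectrum = np.array([np.sum(c[l] ** 2) for l in range(1, 21)])
```

In the Bayesian formulation the magnitude and slope would themselves be unknowns with their own priors, rather than fixed as here.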
For the time-stationary global geomagnetic field, a new modelling concept is presented. A Bayesian non-parametric approach provides realistic, location-dependent uncertainty estimates. Modelling-related variabilities are dealt with systematically by making few subjective a priori assumptions. Rather than parametrizing the model by Gauss coefficients, a functional analytic approach is applied: the geomagnetic potential is assumed to be a Gaussian process, describing a distribution over functions. A priori correlations are given by an explicit kernel function with a non-informative dipole contribution. A refined modelling strategy is proposed that accommodates the non-linearities of archeomagnetic observables: first, a rough field estimate is obtained by considering only sites that provide full field vector records; subsequently, this estimate supports the linearization that incorporates the remaining incomplete records. The results for the archeomagnetic field over the past 1000 yr are in general agreement with previous models, while improved model uncertainty estimates are provided.
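The way a Gaussian process turns an explicit kernel into location-dependent uncertainties can be shown with a minimal 1-D regression sketch. The squared exponential kernel and the toy profile data stand in for the paper's closed-form kernel on the geomagnetic potential and the archeomagnetic records.

```python
import numpy as np

def kernel(x1, x2, variance=1.0, scale=1.0):
    """Squared exponential kernel -- a stand-in for the explicit
    geomagnetic correlation kernel used in the paper."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / scale) ** 2)

# Hypothetical observations of one field component along a 1-D profile
x_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.array([0.5, 0.9, 0.1, -0.4])
noise = 0.05

x_new = np.linspace(0, 5, 50)
K = kernel(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
K_s = kernel(x_new, x_obs)
K_ss = kernel(x_new, x_new)

# GP posterior: the covariance gives location-dependent uncertainty for free
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
mean = K_s @ alpha
v = np.linalg.solve(L, K_s.T)
cov = K_ss - v.T @ v
std = np.sqrt(np.clip(np.diag(cov), 0, None))
```

The posterior standard deviation shrinks near data sites and grows away from them, which is exactly the behavior that makes the uncertainty estimates "location dependent".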
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented that incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy-input Gaussian process. All but one of the modeling hyperparameters are marginalized to reduce their influence on the outcome and to translate their variability to the posterior variance. The resulting distribution incorporates the uncertainties related to the dating, measurement, and modeling processes. Results from the application to archeomagnetic data show less variation in the dipole than comparable models, but are in general agreement with previous findings.
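The tensor-product construction amounts to multiplying a spatial kernel and a temporal kernel evaluated on the same pairs of records. The spatial kernel below is a generic Euclidean stand-in (the paper's kernel is a closed form on the sphere), and the positions and dates are random placeholders.

```python
import numpy as np

def k_space(x1, x2, scale=1.0):
    """Stand-in spatial kernel on 3-D positions (the paper uses a
    closed-form correlation kernel on the sphere)."""
    d = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=-1)
    return np.exp(-0.5 * (d / scale) ** 2)

def k_time(t1, t2, tau=100.0):
    """Squared exponential kernel in time, as in the paper."""
    return np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / tau) ** 2)

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 3))            # hypothetical site positions
t = rng.uniform(0, 1000, size=6)       # hypothetical record dates

# Space-time covariance: elementwise product of the two kernels,
# i.e. the tensor-product kernel evaluated at paired (site, date) points
K = k_space(X, X) * k_time(t, t)

# The Schur product of two covariances is again a valid covariance
eigvals = np.linalg.eigvalsh(K)
```

By the Schur product theorem the resulting matrix stays positive semi-definite, so the combined kernel is a legitimate space-time covariance; dating uncertainty would then enter by treating `t` as noisy inputs.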