The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis make it possible to extract maximum quantitative information on the short-term dynamics of geomagnetic activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. They also show that, by using magnetogram regularity to reflect magnetospheric activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
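The spectral-exponent estimation described above can be sketched numerically: fit log power against log frequency and read β off the slope. A minimal illustration, assuming a NumPy environment; the synthetic random-walk signal stands in for magnetogram data and is not the study's data:

```python
import numpy as np

def spectral_exponent(x, dt=1.0):
    """Estimate the power-law exponent beta from the slope of the
    periodogram in log-log coordinates: P(f) ~ f**(-beta)."""
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]          # drop zero frequency
    psd = np.abs(np.fft.rfft(x - np.mean(x)))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
    return -slope

# synthetic self-affine signal: a random walk, whose theoretical
# spectral exponent is beta = 2
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2 ** 14))
beta = spectral_exponent(walk)
```

The estimate should fall near the theoretical value of 2; a wavelet-based version, as used in the study, would additionally resolve local variations of the exponent in time.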
In the 1980s, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, which are considerably more precise than altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent from gravity. Our analysis confirms the existence of large-scale gravity undulations in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific Ocean, where they are particularly clear and associated with a conjugate direction at the 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate; they had never been observed so clearly before, thanks to the great precision of GRACE data at those wavelengths. Given their large scale, these undulations are most likely related to mantle processes.
Taking into account other geophysical observations and models, such as seismological tomography, convection and geochemical models, and mantle electrical conductivity, we argue that all these inputs indicate a directional fabric of mantle flow at depth, reflecting how the history of subduction influences the organization of lower-mantle upwellings.
Wavelet modelling of the gravity field by domain decomposition methods: an example over Japan
(2011)
With the advent of satellite gravity, large gravity data sets of unprecedented quality at low and medium resolution have become available. For local, high-resolution field modelling, they need to be combined with surface gravity data. Such models are then used for various applications, from the study of the Earth's interior to the determination of oceanic currents. Here we show how to realize such a combination in a flexible way using spherical wavelets and applying a domain decomposition approach. This iterative method, based on the Schwarz algorithms, makes it possible to split a large problem into smaller ones and avoids the calculation of the entire normal system, which may be huge if high resolution is sought over wide areas. A subdomain is defined as the harmonic space spanned by a subset of the wavelet family. Based on the localization properties of the wavelets in space and frequency, we define hierarchical subdomains of wavelets at different scales. On each scale, blocks of subdomains are defined by using a tailored spatial splitting of the area. The data weighting and regularization are iteratively adjusted for the subdomains, which makes it possible to handle heterogeneity in the data quality or the gravity variations. Different levels of approximation of the subdomain normals are also introduced, corresponding to building local averages of the data at different resolution levels.
We first provide the theoretical background on domain decomposition methods. Then, we validate the method with synthetic data, considering two kinds of noise: white noise and coloured noise. We then apply the method to data over Japan, where we combine a satellite-based geopotential model, EIGEN-GL04S, and a local gravity model from a combination of land and marine gravity data and an altimetry-derived marine gravity model. A hybrid spherical harmonics/wavelet model of the geoid is obtained at about 15 km resolution and a corrector grid for the surface model is derived.
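The Schwarz-type splitting of a large least-squares problem into subdomain solves can be illustrated on a toy system. This is a generic block Gauss-Seidel sketch on the normal equations, illustrative only and not the paper's wavelet implementation:

```python
import numpy as np

def block_schwarz_lsq(A, y, blocks, n_sweeps=100):
    """Solve min ||A x - y|| by sweeping over parameter subdomains
    (index blocks): each step solves a small least-squares problem on
    the current residual, so the full normal matrix is never formed.
    This is a multiplicative (Gauss-Seidel-type) Schwarz iteration."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for idx in blocks:
            r = y - A @ x                      # residual of the global model
            dx, *_ = np.linalg.lstsq(A[:, idx], r, rcond=None)
            x[idx] += dx                       # local correction
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
y = A @ x_true
blocks = [np.arange(0, 10), np.arange(10, 20)]   # two non-overlapping subdomains
x_hat = block_schwarz_lsq(A, y, blocks)
```

For non-overlapping blocks this is block Gauss-Seidel on a symmetric positive-definite normal system, so the sweeps converge to the global least-squares solution; overlap and damping, as in the paper's hierarchical setting, would require a weighted combination of the local corrections.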
Potential fields are classically represented on the sphere using spherical harmonics. However, this decomposition leads to numerical difficulties when the data to be modelled are irregularly distributed or cover a regional zone. To overcome this drawback, we develop a new representation of the magnetic and gravity fields based on wavelet frames. In this paper, we first describe how to build wavelet frames on the sphere. The chosen frames are based on the Poisson multipole wavelets, which are of special interest for geophysical modelling, since their scaling parameter is linked to the multipole depth (Holschneider et al.). The implementation of wavelet frames results from a discretization of the continuous wavelet transform in space and scale. We also build different frames using two kinds of spherical meshes and various scale sequences. We then validate the mathematical method through simple fits of scalar functions on the sphere, named 'scalar models'. Moreover, we propose magnetic and gravity models, referred to as 'vectorial models', taking into account geophysical constraints. We then discuss the representation of the Earth's magnetic and gravity fields from data regularly or irregularly distributed. Comparisons of the obtained wavelet models with the initial spherical harmonic models point out the advantages of wavelet modelling when the magnetic or gravity data used are sparsely distributed or cover only a very local zone.
This paper is devoted to the digital processing of multicomponent seismograms using wavelet analysis. The goal of this processing is to identify Rayleigh surface elastic waves and determine their properties. A new method for calculating the ellipticity parameters of a wave in the form of a time-frequency spectrum is proposed, which offers wide possibilities for filtering seismic signals in order to suppress or extract the Rayleigh components. A model of dispersion and dissipation of elliptic waves written in terms of wavelet spectra of complex (two-component) signals is also proposed. The model is used to formulate a nonlinear minimization problem that allows for a high-accuracy calculation of the group and phase velocities and the attenuation factor for a propagating elliptic Rayleigh wave. All methods considered in the paper are illustrated with the use of test signals.
Standing stocks are typically easier to measure than process rates such as production. Hence, stocks are often used as indicators of ecosystem functions, although the latter are generally more strongly related to rates than to stocks. The regulation of stocks and rates, and thus their variability over time, may differ, as stocks constitute the net result of production and losses. Based on long-term, high-frequency measurements in a large, deep lake, we explore the variability patterns in primary and bacterial production and relate them to those of the corresponding standing stocks, i.e. chlorophyll concentration, phytoplankton biomass, and bacterial biomass. We employ different methods (coefficient of variation, spline fitting, and spectral analysis) which complement each other in assessing the variability present in the plankton data at different temporal scales. In phytoplankton, we found that the overall variability of primary production is dominated by fluctuations at low frequencies, such as the annual, whereas in the stocks, and chlorophyll in particular, higher frequencies contribute substantially to the overall variance. This suggests that using standing stocks instead of rate measures leads to an under- or overestimation of food shortage for consumers during distinct periods of the year. The range of annual variation in bacterial production is 8 times greater than that of biomass, showing that the variability of bacterial activity (e.g. oxygen consumption, remineralisation) would be underestimated if biomass were used. The P/B ratios were variable and, although clear trends are present in both bacteria and phytoplankton, no systematic relationship between stock and rate measures was found for the two groups. Hence, standing stock and process rate measures exhibit different variability patterns, and care is needed when interpreting the mechanisms and implications of the variability encountered.
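The variance-partitioning idea, i.e. asking how much of a series' overall variability sits at low (e.g. annual) frequencies, can be sketched with a periodogram. The toy daily series and cutoff below are illustrative, not the lake data:

```python
import numpy as np

def coefficient_of_variation(x):
    """Standard deviation relative to the mean."""
    return np.std(x) / np.mean(x)

def low_frequency_variance_fraction(x, dt=1.0, f_cut=1.0 / 300.0):
    """Fraction of the total variance carried by frequencies at or
    below f_cut, read off the periodogram (Parseval's theorem)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return power[freqs <= f_cut].sum() / power.sum()

# toy daily "production" series: a dominant annual cycle plus weak noise
t = np.arange(5 * 365)
rng = np.random.default_rng(2)
production = 10 + 5 * np.sin(2 * np.pi * t / 365) + 0.5 * rng.standard_normal(t.size)

cv = coefficient_of_variation(production)
frac = low_frequency_variance_fraction(production)   # annual band dominates here
```

For this rate-like signal nearly all variance sits below the cutoff; a stock-like series with substantial high-frequency power would yield a much smaller fraction, which is the contrast the study quantifies.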
The magnetosphere-ionosphere-thermosphere (MIT) dynamic system significantly depends on the highly variable solar wind conditions, in particular, on changes of the strength and orientation of the interplanetary magnetic field (IMF). The solar wind and IMF interactions with the magnetosphere drive the MIT system via the magnetospheric field-aligned currents (FACs). The global modeling helps us to understand the physical background of this complex system. With the present study, we test the recently developed high-resolution empirical model of field-aligned currents MFACE (a high-resolution Model of Field-Aligned Currents through Empirical orthogonal functions analysis). These FAC distributions were used as input of the time-dependent, fully self-consistent global Upper Atmosphere Model (UAM) for different seasons and various solar wind and IMF conditions. The modeling results for neutral mass density and thermospheric wind are directly compared with the CHAMP satellite measurements. In addition, we perform comparisons with the global empirical models: the thermospheric wind model (HWM07) and the atmosphere density model (Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Extended 2000). The theoretical model shows a good agreement with the satellite observations and an improved behavior compared with the empirical models at high latitudes. Using the MFACE model as input parameter of the UAM model, we obtain a realistic distribution of the upper atmosphere parameters for the Northern and Southern Hemispheres during stable IMF orientation as well as during dynamic situations. This variant of the UAM can therefore be used for modeling the MIT system and space weather predictions.
We investigate the influence of spatial heterogeneities on various aspects of brittle failure and seismicity in a model of a large strike-slip fault. The model dynamics is governed by realistic boundary conditions consisting of constant velocity motion of regions around the fault, static/kinetic friction laws, creep with depth-dependent coefficients, and 3-D elastic stress transfer. The dynamic rupture is approximated on a continuous time scale using a finite stress propagation velocity ("quasi-dynamic model"). The model produces a "brittle-ductile" transition at a depth of about 12.5 km, realistic hypocenter distributions, and other features of seismicity compatible with observations. Previous work suggested that the range of size scales in the distribution of strength-stress heterogeneities acts as a tuning parameter of the dynamics. Here we test this hypothesis by performing a systematic parameter-space study with different forms of heterogeneities. In particular, we analyze spatial heterogeneities that can be tuned by a single parameter in two distributions: (1) high stress drop barriers in near-vertical directions and (2) spatial heterogeneities with fractal properties and variable fractal dimension. The results indicate that the first form of heterogeneities provides an effective means of tuning the behavior while the second does not. In relatively homogeneous cases, the fault self-organizes to large-scale patches and big events are associated with inward failure of individual patches and sequential failures of different patches. The frequency-size event statistics in such cases are compatible with the characteristic earthquake distribution and large events are quasi-periodic in time. In strongly heterogeneous or near-critical cases, the rupture histories are highly discontinuous and consist of complex migration patterns of slip on the fault. In such cases, the frequency-size and temporal statistics follow approximately power-law relations.
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 Mw 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment including estimation of the maximum earthquake magnitude is considered to be necessary to decide on the future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and calculate precise confidence intervals based on an imposed level of confidence α. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Λ and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Λ and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity, but high-risk, lower Rhine embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
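Under the stated Gutenberg-Richter and Poisson assumptions, and taking the parameters as known rather than estimated for simplicity, the magnitude not exceeded within a future interval T_f at a given confidence has a closed form. A sketch with illustrative parameter values, not the paper's estimates:

```python
import math

def max_magnitude_quantile(rate, b, m0, t_f, conf=0.9):
    """Magnitude that will not be exceeded during the next t_f years
    with probability `conf`, for a Poisson process with rate `rate`
    above threshold m0 and Gutenberg-Richter b-value `b`:
      P(M_T <= m) = exp(-rate * t_f * 10**(-b * (m - m0)))."""
    return m0 + math.log10(rate * t_f / (-math.log(conf))) / b

# illustrative numbers: 10 events per year above magnitude 4,
# b = 1, a 30-year horizon, 90% confidence
m_q = max_magnitude_quantile(rate=10.0, b=1.0, m0=4.0, t_f=30.0)
```

The paper's full treatment additionally propagates the posterior uncertainty in Λ and b through this quantile rather than plugging in point values.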
Earthquake catalogs are probably the most informative data source about spatiotemporal seismicity evolution. The catalog quality in one of the most active seismogenic zones in the world, Japan, is excellent, although changes in quality arising, for example, from an evolving network are clearly present. Here, we seek the best estimate for the largest expected earthquake in a given future time interval from a combination of historic and instrumental earthquake catalogs. We extend the technique introduced by Zoller et al. (2013) to estimate the maximum magnitude in a time window of length T_f for earthquake catalogs with a varying level of completeness. In particular, we consider the case in which two types of catalogs are available: a historic catalog and an instrumental catalog. This leads to competing interests with respect to the estimation of the two parameters of the Gutenberg-Richter law, the b-value and the event rate λ above a given lower-magnitude threshold (the a-value). The b-value is estimated most precisely from the frequently occurring small earthquakes; however, the tendency of small events to cluster in aftershocks, swarms, etc. violates the assumption of a Poisson process that is used for the estimation of λ. We suggest addressing this conflict by estimating b solely from instrumental seismicity and using large-magnitude events from historic catalogs for the earthquake rate estimation. Applying the method to Japan, there is a probability of about 20% that the maximum expected magnitude during any future time interval of length T_f = 30 years is m >= 9.0. Studies of different subregions in Japan indicate high probabilities for M 8 earthquakes along the Tohoku arc and relatively low probabilities in the Tokai, Tonankai, and Nankai region. Finally, for scenarios related to long time horizons and high confidence levels, the maximum expected magnitude will be around 10.
We present a new model of the geomagnetic field spanning the last 20 years and called Kalmag. Deriving from the assimilation of CHAMP and Swarm vector field measurements, it separates the different contributions to the observable field through parameterized prior covariance matrices. To make the inverse problem numerically feasible, it has been sequentialized in time through the combination of a Kalman filter and a smoothing algorithm. The model provides reliable estimates of past, present and future mean fields and associated uncertainties. The version presented here is an update of our IGRF candidates; the amount of assimilated data has been doubled and the considered time window has been extended from [2000.5, 2019.74] to [2000.5, 2020.33].
In the present study, we summarize and evaluate the endeavors from recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of the lowest and highest degree of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data covering a long time interval, including several earthquakes with magnitudes close to m_max, are available. Even if detailed earthquake information from some centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the 2004 M6.0 Parkfield earthquake. We carry out a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is obtained with a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjunct. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can generally be explained by a stress relaxation process for the Parkfield case. This result also indicates that the data constraining the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast ratio across the fault of ~1.15 in seismic velocity.
The motility of adherent eukaryotic cells is driven by the dynamics of the actin cytoskeleton. Despite the common force-generating actin machinery, different cell types often show diverse modes of locomotion that differ in their shape dynamics, speed, and persistence of motion. Recently, experiments in Dictyostelium discoideum have revealed that different motility modes can be induced in this model organism, depending on genetic modifications, developmental conditions, and synthetic changes of intracellular signaling. Here, we report experimental evidence that in a mutated D. discoideum cell line with increased Ras activity, switches between two distinct migratory modes, the amoeboid and fan-shaped type of locomotion, can even spontaneously occur within the same cell. We observed and characterized repeated and reversible switchings between the two modes of locomotion, suggesting that they are distinct behavioral traits that coexist within the same cell. We adapted an established phenomenological motility model that combines a reaction-diffusion system for the intracellular dynamics with a dynamic phase field to account for our experimental findings.
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that makes it possible to incorporate prior knowledge without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge about the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading.
We present an alarm-based earthquake forecast model that uses the early aftershock statistics (EAST). This model is based on the hypothesis that the time delay before the onset of the power-law aftershock decay rate decreases as the level of stress and the seismogenic potential increase. Here, we estimate this time delay from < t(g)>, the time constant of the Omori-Utsu law. To isolate space-time regions with a relative high level of stress, the single local variable of our forecast model is the E-a value, the ratio between the long-term and short-term estimations of < t(g)>. When and where the E-a value exceeds a given threshold (i.e., the c value is abnormally small), an alarm is issued, and an earthquake is expected to occur during the next time step. Retrospective tests show that the EAST model has better predictive power than a stationary reference model based on smoothed extrapolation of past seismicity. The official prospective test for California started on 1 July 2009 in the testing center of the Collaboratory for the Study of Earthquake Predictability (CSEP). During the first nine months, 44 M >= 4 earthquakes occurred in the testing area. For this time period, the EAST model has better predictive power than the reference model at a 1% level of significance. Because the EAST model has also a better predictive power than several time-varying clustering models tested in CSEP at a 1% level of significance, we suggest that our successful prospective results are not due only to the space-time clustering of aftershocks.
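The alarm logic, thresholding the ratio E_a of long-term to short-term delay-constant estimates, can be sketched directly; the grid of values below is hypothetical, not CSEP data:

```python
import numpy as np

def east_alarm(c_long, c_short, threshold=2.0):
    """Issue an alarm wherever E_a, the ratio of the long-term to the
    short-term estimate of the aftershock delay constant, exceeds a
    threshold (i.e. where the short-term c value is abnormally small)."""
    e_a = np.asarray(c_long, dtype=float) / np.asarray(c_short, dtype=float)
    return e_a > threshold, e_a

# hypothetical delay-constant estimates on four space-time cells
c_long = np.array([0.5, 0.5, 0.6, 0.4])
c_short = np.array([0.5, 0.1, 0.55, 0.05])
alarm, e_a = east_alarm(c_long, c_short)
```

Cells two and four, where the short-term estimate has dropped well below the long-term one, raise alarms; the model's skill then rests on how ⟨t_g⟩ is estimated from the aftershock sequences themselves.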
We describe a new, original approach to the modelling of the Earth's magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. The method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. The sources that contribute most significantly to the field measured at the surface of the Earth are modelled. Their separation is based on strong prior information about their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classic approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller time and spatial scales. An important aspect of our method is that it yields reliable error bars for all model parameters. These error bars, however, are only as reliable as the descriptions of the different sources and the prior information used are realistic. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth edition of the International Geomagnetic Reference Field.
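The forecast/analysis cycle of a Kalman filter, the sequential backbone of such schemes, can be written compactly. This is the generic linear textbook form on a toy scalar state, not the paper's field parameterization:

```python
import numpy as np

def kalman_step(x, P, y, F, Q, H, R):
    """One forecast/analysis cycle of a linear Kalman filter:
    propagate the state and its covariance, then update them with
    the observation y."""
    x_f = F @ x                       # forecast state
    P_f = F @ P @ F.T + Q             # forecast covariance
    S = H @ P_f @ H.T + R             # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)     # analysis state
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# scalar random-walk state, observed directly with noise
x, P = np.array([0.0]), np.array([[1.0]])
F = np.array([[1.0]]); Q = np.array([[1.0]])
H = np.array([[1.0]]); R = np.array([[0.5]])
for y in ([1.0], [1.2], [0.9]):
    x, P = kalman_step(x, P, np.array(y), F, Q, H, R)
```

After assimilating the three observations the state estimate sits near their level and the posterior variance P has shrunk well below its prior value, which is exactly the mechanism that produces the error bars discussed above.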
High-precision observations of the present-day geomagnetic field by ground-based observatories and satellites provide unprecedented conditions for unveiling the dynamics of the Earth's core. Combining geomagnetic observations with dynamo simulations in a data assimilation (DA) framework allows the reconstruction of past and present states of the internal core dynamics. The essential information that couples the internal state to the observations is provided by the statistical correlations from a numerical dynamo model in the form of a model covariance matrix. Here we test a sequential DA framework, working through a succession of forecast and analysis steps, that extracts the correlations from an ensemble of dynamo models. The primary correlations couple variables of the same azimuthal wave number, reflecting the predominant axial symmetry of the magnetic field. Synthetic tests show that the scheme becomes unstable when confronted with high-precision geomagnetic observations. Our study has identified spurious secondary correlations as the origin of the problem. Keeping only the primary correlations by localizing the covariance matrix with respect to the azimuthal wave number suffices to stabilize the assimilation. While the first analysis step is fundamental in constraining the large-scale interior state, further assimilation steps refine the smaller and more dynamical scales. This refinement turns out to be critical for long-term geomagnetic predictions. Increasing the number of assimilation steps from one to 18 roughly doubles the prediction horizon for the dipole from about three to six centuries, and from 30 to about 60 yr for smaller observable scales. This improvement is also reflected in the predictability of surface intensity features such as the South Atlantic Anomaly. Intensity prediction errors are decreased roughly by half when assimilating long observation sequences.
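Localizing an ensemble covariance with respect to the azimuthal wave number amounts to zeroing cross-covariances between variables of different wave numbers, so that only the primary (same-m) correlations survive. A minimal sketch; the state size, ensemble, and wave-number assignment are illustrative:

```python
import numpy as np

def localize_by_wavenumber(cov, m_index):
    """Zero all cross-covariances between state variables that do not
    share the same azimuthal wave number m, keeping only the primary
    (same-m) correlations."""
    m = np.asarray(m_index)
    same_m = m[:, None] == m[None, :]
    return np.where(same_m, cov, 0.0)

# ensemble covariance of a 4-variable state with wave numbers 0, 1, 1, 2
rng = np.random.default_rng(3)
ensemble = rng.standard_normal((20, 4))           # 20 members, 4 variables
cov = np.cov(ensemble, rowvar=False)
cov_loc = localize_by_wavenumber(cov, [0, 1, 1, 2])
```

With a small ensemble the raw sample covariance carries spurious cross-m entries purely from sampling noise; masking them out is what stabilizes the analysis steps described above.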
Borehole logs provide in situ information about the fluctuations of petrophysical properties with depth and thus allow the characterization of crustal heterogeneities. A detailed investigation of these measurements may allow features of the geological medium to be extracted. In this study, we suggest a regularity analysis based on the continuous wavelet transform to examine sonic log data. The description of the local behavior of the logs at each depth is carried out using the local Hurst exponent, estimated by two approaches: the local wavelet approach and the average-local wavelet approach. First, a synthetic log, generated using the random midpoint displacement algorithm, is processed by the regularity analysis. The obtained Hurst curves allowed the different layers composing the simulated geological model to be discerned. Next, this analysis is extended to real sonic log data recorded at the Kontinentales Tiefbohrprogramm (KTB) pilot borehole (Continental Deep Drilling Program, Germany). The results show a significant correlation between the estimated Hurst exponents and the lithological discontinuities crossed by the well. Hence, the Hurst exponent can be used as a tool to characterize underground heterogeneities.
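The regularity analysis rests on how wavelet coefficients scale with scale: for a fractional Brownian motion, the detail-coefficient variance grows as 2^(j(2H+1)) across dyadic scales j. A minimal sketch of such an estimate, using an orthonormal Haar decomposition on a synthetic Brownian "log" (for which H = 0.5) rather than the authors' continuous wavelet transform:

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_detail_variances(x, levels):
    """Orthonormal Haar pyramid; detail-coefficient variance per dyadic scale."""
    a = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        a = a[: 2 * (len(a) // 2)]               # truncate to even length
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # details at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation, carried on
        out.append(np.var(d))
    return np.array(out)

# Synthetic "log": ordinary Brownian motion, for which H = 0.5.
log_signal = np.cumsum(rng.standard_normal(2 ** 15))

# For fBm, Var(d_j) ~ 2**(j*(2H+1)); fit over the coarser octaves where
# the Haar estimate is least biased at fine scales.
v = haar_detail_variances(log_signal, levels=7)
j = np.arange(1, 8)
slope = np.polyfit(j[2:], np.log2(v[2:]), 1)[0]
H_est = (slope - 1.0) / 2.0
```

A depth-dependent (local) Hurst exponent, as in the study, would repeat this estimate inside sliding depth windows rather than on the whole signal.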
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence time distribution of large earthquakes, based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors accounting for grouped magnitudes and random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early stage of the seismicity evolution, even though only a small number of large earthquakes have occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and therefore cannot be neglected. We show how to correct for the bias caused by magnitude errors. For the region of Parkfield we find that the aperiodicity, or the coefficient of variation, is clearly higher than in studies based solely on the large earthquakes.
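The continuous Bayesian updating of the aperiodicity can be illustrated with a simple grid posterior. The sketch below assumes a lognormal renewal model with the location parameter fixed at the sample log-mean, a deliberate simplification of the renewal model used in the study; the aperiodicity (coefficient of variation) of a lognormal is sqrt(exp(sigma^2) − 1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic recurrence intervals from a lognormal renewal model with
# sigma = 0.5, i.e. a true aperiodicity of sqrt(exp(0.25) - 1) ~ 0.53.
intervals = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=30)

# Grid-based Bayesian update of sigma (flat prior); mu fixed at the log-mean.
mu = np.mean(np.log(intervals))
sigmas = np.linspace(0.05, 2.0, 400)
loglik = np.array([
    np.sum(-np.log(intervals * s * np.sqrt(2.0 * np.pi))
           - (np.log(intervals) - mu) ** 2 / (2.0 * s ** 2))
    for s in sigmas
])
post = np.exp(loglik - loglik.max())
post /= post.sum()                      # normalized posterior on the grid

# Posterior mean of the aperiodicity (coefficient of variation).
cv_post_mean = np.sum(post * np.sqrt(np.exp(sigmas ** 2) - 1.0))
```

Each newly observed interval would simply be appended to `intervals` and the posterior recomputed, which is the "continuous updating" idea; magnitude uncertainties would enter through a broadened likelihood rather than the point data used here.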
Amoebae explore their environment in a random way, unless external cues, such as nutrients, bias their motion. Even in the absence of cues, however, experimental cell tracks show some degree of persistence. In this paper, we analyzed individual cell tracks in the framework of a linear mixed effects model, where each track is modeled by a fractional Brownian motion, i.e., a Gaussian process exhibiting a long-term correlation structure superposed on a linear trend. The degree of persistence was quantified by the Hurst exponent of fractional Brownian motion. Our analysis of experimental cell tracks of the amoeba Dictyostelium discoideum showed a persistent movement for the majority of tracks. Employing a sliding window approach, we estimated the variations of the Hurst exponent over time, which allowed us to identify points in time where the correlation structure was distorted ("outliers"). Coarse graining of track data via down-sampling allowed us to identify the dependence of persistence on the spatial scale. While one would expect the (mode of the) Hurst exponent to be constant on different temporal scales due to the self-similarity property of fractional Brownian motion, we observed a trend towards stronger persistence for the down-sampled cell tracks, indicating stronger persistence on larger time scales.
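The sliding-window estimation of the Hurst exponent can be sketched as follows, using the scaling of increment standard deviations (std(x[t+τ] − x[t]) ∝ τ^H) on a synthetic Brownian track with H = 0.5; the window and step sizes are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def hurst(x, lags=range(1, 15)):
    """Hurst exponent from increment scaling: std(x[t+lag]-x[t]) ~ lag**H."""
    lags = np.asarray(list(lags))
    sd = np.array([np.std(x[lag:] - x[:-lag]) for lag in lags])
    return np.polyfit(np.log(lags), np.log(sd), 1)[0]

# One coordinate of a synthetic "cell track": Brownian motion (H = 0.5).
track = np.cumsum(rng.standard_normal(6000))

# Sliding windows give a time-resolved persistence estimate H(t).
win, step = 1000, 500
H_t = [hurst(track[i:i + win]) for i in range(0, len(track) - win + 1, step)]
```

Windows where H(t) departs strongly from the track-wide value would flag the "outlier" episodes where the correlation structure is distorted; coarse graining corresponds to applying the same estimator to `track[::k]`.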
We present a statistical analysis of focal mechanism orientations for nine California fault zones with the goal of quantifying variations of fault zone heterogeneity at seismogenic depths. The focal mechanism data are generated from first motion polarities for earthquakes in the time period 1983-2004, magnitude range 0-5, and depth range 0-15 km. Only mechanisms with good quality solutions are used. We define fault zones using 20 km wide rectangles and use summations of normalized potency tensors to describe the distribution of double-couple orientations for each fault zone. Focal mechanism heterogeneity is quantified using two measures computed from the tensors that relate to the scatter in orientations and rotational asymmetry or skewness of the distribution. We illustrate the use of these quantities by showing relative differences in the focal mechanism heterogeneity characteristics for different fault zones. These differences are shown to relate to properties of the fault zone surface traces such that increased scatter correlates with fault trace complexity and rotational asymmetry correlates with the dominant fault trace azimuth. These correlations indicate a link between the long-term evolution of a fault zone over many earthquake cycles and its seismic behaviour over a 20 yr time period. Analysis of the partitioning of San Jacinto fault zone focal mechanisms into different faulting styles further indicates that heterogeneity is dominantly controlled by structural properties of the fault zone, rather than time or magnitude related properties of the seismicity.
Preface
(2018)
We use a dynamic scanning electron microscope (DySEM) to analyze the movement of oscillating micromechanical structures. A dynamic secondary electron (SE) signal is recorded and correlated to the oscillatory excitation of a scanning force microscope (SFM) cantilever by means of lock-in amplifiers. We show how the relative phase of the oscillations modulates the resulting real-part and phase images of the DySEM mapping. This can be used to obtain information about the underlying oscillatory dynamics. We apply the theory to the case of an oscillating cantilever driven at different flexural and torsional resonance modes. This is an extension of a recent work (Schroter et al 2012 Nanotechnology 23 435501), where we reported on a general methodology to distinguish nonlinear features caused by the imaging process from those caused by cantilever motion.
In this paper we propose a procedure which allows the construction of a large family of FIR d x d matrix wavelet filters by exploiting the one-to-one correspondence between QMF systems and orthogonal operators which commute with the shifts by two. A characterization of the class of filters of full rank type that can be obtained with such a procedure is given. In particular, we restrict our attention to a special construction based on the representation of SO(2d) in terms of the elements of its Lie algebra. Explicit expressions for the filters in the case d = 2 are given, as a result of a local analysis of the parameterization obtained from perturbing the Haar system.
Aftershock rates seem to follow a power-law decay, but the question of the aftershock frequency immediately after an earthquake remains open. We estimate an average aftershock decay rate within one day in southern California by stacking in time different sequences triggered by main shocks ranging in magnitude from 2.5 to 4.5. Then we estimate the time delay before the onset of the power-law aftershock decay rate. Over the last 20 years, we observe that this time delay suddenly increases after large earthquakes and slowly decreases at a constant rate during periods of low seismicity. In a band-limited power-law model such variations can be explained by different patterns of stress distribution at different stages of the seismic cycle. We conclude that, on regional length scales, the brittle upper crust exhibits a collective behavior reflecting to some extent the proximity of a threshold of fracturing.
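A band-limited (modified Omori) rate K/(c+t)^p captures both regimes discussed above: an approximately constant rate for t ≪ c and a power-law decay with exponent −p for t ≫ c. A brief numerical check with illustrative parameters:

```python
import numpy as np

# Modified Omori law: rate(t) = K / (c + t)**p; the delay c sets when the
# power-law decay becomes visible (parameters are illustrative only).
K, c, p = 100.0, 0.05, 1.0
t = np.logspace(-3, 2, 200)                      # days after the main shock
rate = K / (c + t) ** p

# Early times (t << c): nearly flat plateau at K / c**p.
plateau = rate[t < 0.002]
# Late times (t >> c): the log-log slope approaches -p.
late = t > 1.0
slope = np.polyfit(np.log(t[late]), np.log(rate[late]), 1)[0]
```

In this picture, the sudden increase of the observed onset delay after large earthquakes corresponds to an increase of c, i.e. a longer-lived plateau before the power-law decay sets in.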
We propose a methodological approach to seismic hazard evaluation that allows the study of the influence of different modelling assumptions, relative to the spatial and temporal distribution of earthquakes, on the maximum values of expected intensities. In particular, we show that the estimated hazard at a fixed point is very sensitive to the assumed spatial distribution of epicentres and their estimators. As we will see, the usual approach, based on uniformly distributing the epicentres inside each seismogenic zone, is likely to be biased towards lower expected intensity values. This will be made more precise later. Recall that the term "bias" means that the expectation of the estimated quantity (taken as a random variable on the space of statistics) differs from the expectation of the quantity itself. Instead, our approach, based on an estimator that takes into account the observed clustering of events, is essentially unbiased, as shown by a Monte Carlo simulation, and is configured on a non-isotropic macroseismic attenuation model which is independently estimated for each zone.
From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying a correlation-based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k^(-2) slope, where k is the frequency, is an acceptable approximation for these spectra, with a possible exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the latter occur. This is an indication that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales (i.e. less than 2 years) could still be extracted from data to reveal an unexplored part of the core dynamics.
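The k^(-2) approximation can be illustrated with a simple synthetic pipeline: synthesize a monthly series whose power spectrum is proportional to k^(-2) by spectral synthesis, then fit the log-log periodogram. A sketch (the series is synthetic, not observatory data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthesize a monthly series with power ~ k**-2, then verify that a
# log-log periodogram fit recovers the spectral exponent.
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0 / 12.0)         # cycles per year (monthly data)
amp = np.zeros(len(freqs))
amp[1:] = freqs[1:] ** -1.0                      # |X(k)| ~ k^-1  =>  power ~ k^-2
phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
series = np.fft.irfft(amp * np.exp(1j * phases), n=n)

# Periodogram and spectral-exponent fit (DC and Nyquist bins excluded).
power = np.abs(np.fft.rfft(series)) ** 2
slope = np.polyfit(np.log(freqs[1:-1]), np.log(power[1:-1]), 1)[0]
```

On real Gauss-coefficient series the periodogram would scatter around the power law, so the fitted slope is an approximation rather than the exact −2 recovered here.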
In this paper, we discuss the origin of superswell volcanism on the basis of representation and analysis of recent gravity and magnetic satellite data with wavelets in spherical geometry. We computed a refined gravity field in the south central Pacific based on the GRACE satellite GGM02S global gravity field and the KMS02 altimetric grid, and a magnetic anomaly field based on CHAMP data. The magnetic anomalies are marked by the magnetic lineation of the seafloor spreading and by a strong anomaly in the Tuamotu region, which we interpret as evidence for crustal thickening. We interpret our gravity field through a continuous wavelet analysis that allows us to get a first idea of the internal density distribution. We also compute the continuous wavelet analysis of the bathymetric contribution to discriminate between deep and superficial sources. According to the gravity signature of the different chains as revealed by our analysis, various processes are at the origin of the volcanism in French Polynesia. As evidence, we show a large-scale anomaly over the Society Islands that we interpret as the gravity signature of a deeply anchored mantle plume. The gravity signature of the Cook-Austral chain indicates a complex origin which may involve deep processes. Finally, we discuss the particular location of the Marquesas chain as suggesting that the origin of the volcanism may interfere with secondary convection rolls or may be controlled by lithospheric weakness due to the regional stress field, or else related to the presence of the nearby Tuamotu plateau.
Multivariate analyses of fixation durations in reading with linear mixed and additive mixed models
(2012)
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes of the earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. To this end, we develop a Bayesian approach to automatically identify change-points in seismicity data. Using the Bayes factor, we select a suitable model, estimate possible change-points, and additionally use a likelihood ratio test to calculate the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, showing its capability to detect real change-points. Finally, we apply the approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
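The likelihood-ratio component of the approach can be sketched for the simplest case, a single rate change in a homogeneous Poisson process: maximize the two-rate log-likelihood over candidate change-points and compare it with the constant-rate fit. The event times below are synthetic and the scan grid is illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def loglik_ratio(times, T, tau):
    """Log-likelihood ratio: two-rate Poisson model (change at tau) vs. a
    single constant rate, for event times on [0, T]."""
    times = np.asarray(times)
    n = len(times)
    n1 = int(np.sum(times < tau))
    n2 = n - n1
    ll0 = n * np.log(n / T) - n                       # constant-rate MLE fit
    ll1 = ((n1 * np.log(n1 / tau) if n1 else 0.0) - n1
           + (n2 * np.log(n2 / (T - tau)) if n2 else 0.0) - n2)
    return ll1 - ll0

# Synthetic catalogue: rate ~1 event/unit before tau=50, rate ~3 after.
T = 100.0
times = np.concatenate([np.sort(rng.uniform(0.0, 50.0, 50)),
                        np.sort(rng.uniform(50.0, 100.0, 150))])

# Scan candidate change-points; the maximum marks the estimated change.
taus = np.linspace(5.0, 95.0, 181)
lrs = np.array([loglik_ratio(times, T, tau) for tau in taus])
tau_hat = float(taus[np.argmax(lrs)])
```

The maximized ratio can then be referred to a chi-squared-type null distribution to assess significance; the Bayes-factor model selection of the paper is a separate step not shown here.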
Complex systems range from "hard" physical ones, such as climate physics or turbulence in fluids and plasmas, to so-called "soft" ones found in biology, soft-matter physics, sociology, or economics. Building an understanding of such a system involves a description in terms of statistics and, ultimately, mathematical equations. Modern data analysis provides a large set of tools for analysing complexity at different levels of description. This course discusses and practises statistical methods with an emphasis on dynamical systems. On the methodological side, linear and nonlinear approaches are covered, including the standard tools of descriptive and inferential statistics, wavelet analysis, nonparametric regression, and the estimation of nonlinear measures such as fractal dimensions, entropies, and complexity measures. On the modelling side, deterministic and stochastic systems, chaos, scaling, and the emergence of complexity through interaction are discussed, for both discrete and spatially extended systems. The two approaches are united through the system analysis of suitable examples.
In the estimation of dispersion with the help of wavelet analysis, considerable emphasis has been put on the extraction of the group velocity using the modulus of the wavelet transform. In this paper we give an asymptotic expression of the full propagator in wavelet space that comprises the phase velocity as well. This operator establishes a relationship between the observed signals at two different stations during wave propagation in a dispersive and attenuating medium. Numerical and experimental examples are presented to show that the method accurately models seismic wave dispersion and attenuation.
The additional magnetic field produced by the ionospheric current system is a part of the Earth’s magnetic field. This current system is a highly variable part of a global electric circuit. The solar wind and interplanetary magnetic field (IMF) interaction with the Earth’s magnetosphere is the external driver for the global electric circuit in the ionosphere. The energy is transferred via the field-aligned currents (FACs) to the Earth’s ionosphere. The interactions between the neutral and charged particles in the ionosphere lead to the so-called thermospheric neutral wind dynamo, which represents the second important driver for the global current system. Both processes are components of the magnetosphere–ionosphere–thermosphere (MIT) system, which depends on solar and geomagnetic conditions, and both show significant seasonal and UT variations.
The modeling of the global dynamic ionospheric current system of the Earth is the first aim of this investigation. For our study, we use the Potsdam version of the Upper Atmosphere Model (UAM-P). The UAM is a first-principles, time-dependent, and fully self-consistent numerical global model. The model includes the thermosphere, ionosphere, plasmasphere, and inner magnetosphere, as well as the electrodynamics of the coupled MIT system, for the altitude range from 80 (60) km up to 15 Earth radii. The UAM-P differs from the UAM in its new electric field block. For this study, the low-latitude and equatorial electrodynamics of the UAM-P model was improved.
The calculation of the ionospheric current system’s contribution to the Earth’s magnetic field is the second aim of this study. We present a method that allows the additional magnetic field inside and outside the current layer to be computed from the space current density distribution using the Biot-Savart law. Additionally, we compare the additional magnetic field calculations based on 2D (equivalent currents) and 3D current distributions.
We propose a reduced dynamical system describing the coupled evolution of fluid flow and magnetic field at the top of the Earth's core between the years 1900 and 2014. The flow evolution is modeled with a first-order autoregressive process, while the magnetic field obeys the classical frozen flux equation. An ensemble Kalman filter algorithm serves to constrain the dynamics with the geomagnetic field and its secular variation given by the COV-OBS.x1 model. Using a large ensemble with 40,000 members provides meaningful statistics, including reliable error estimates. The model highlights two distinct flow scales. Slowly varying large-scale elements include the already documented eccentric gyre. Localized short-lived structures include distinctly ageostrophic features like the high-latitude polar jet in the Northern Hemisphere. Comparisons with independent observations of the length-of-day variations not only validate the flow estimates but also suggest an acceleration of the geostrophic flows over the last century. Hindcasting tests show that our model outperforms simpler prediction strategies (linear extrapolation and a stationary flow). The predictability limit, of about 2,000 years for the magnetic dipole component, is mostly determined by the random fast-varying dynamics of the flow and much less by the geomagnetic data quality or lack of small-scale information.
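The forecast/analysis cycle of an ensemble Kalman filter driven by a first-order autoregressive model can be reduced to a scalar toy problem. The sketch below, with illustrative AR coefficient, noise levels, and ensemble size, and a perturbed-observation analysis step, is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative scalar setup: AR(1) "flow", noisy observations, EnKF cycle.
phi, q, r, N = 0.95, 0.3, 0.5, 500   # AR coeff., model noise, obs noise, members

steps = 200
x_true = np.zeros(steps)
for t in range(1, steps):
    x_true[t] = phi * x_true[t - 1] + q * rng.standard_normal()
y = x_true + r * rng.standard_normal(steps)          # noisy observations

ens = rng.standard_normal(N)                          # initial ensemble
est = np.zeros(steps)
for t in range(steps):
    ens = phi * ens + q * rng.standard_normal(N)      # forecast step (AR(1))
    P = np.var(ens)                                   # forecast spread
    K = P / (P + r ** 2)                              # Kalman gain
    y_pert = y[t] + r * rng.standard_normal(N)        # perturbed observations
    ens = ens + K * (y_pert - ens)                    # analysis step
    est[t] = ens.mean()

rmse_enkf = float(np.sqrt(np.mean((est - x_true) ** 2)))
rmse_obs = float(np.sqrt(np.mean((y - x_true) ** 2)))
```

The analysis error falls well below the observation error, and the ensemble spread provides the error estimates that a large ensemble (40,000 members in the study) makes statistically reliable.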
Borehole logs provide geological information about the rocks crossed by the wells. Several properties of rocks can be interpreted in terms of lithology, type and quantity of the fluid filling the pores and fractures. Here, the logs are assumed to be nonhomogeneous Brownian motions (nhBms), which are generalized fractional Brownian motions (fBms) indexed by depth-dependent Hurst parameters H(z). Three techniques, the local wavelet approach (LWA), the average-local wavelet approach (ALWA), and the Peltier Algorithm (PA), are suggested to estimate the Hurst functions (or the regularity profiles) from the logs. First, two synthetic sonic logs with different parameters, shaped by the successive random additions (SRA) algorithm, are used to demonstrate the potential of the proposed methods. The obtained Hurst functions are close to the theoretical Hurst functions. Moreover, the transitions between the modeled layers are marked by discontinuities in the Hurst values. It is also shown that PA leads to the best Hurst value estimates. Second, we investigate the multifractional property of sonic log data recorded at two scientific deep boreholes: the pilot hole VB and the ultra-deep main hole HB, drilled for the German Continental Deep Drilling Program (KTB). All the regularity profiles independently obtained for the logs provide a clear correlation with lithology, and from each regularity profile we derive a similar segmentation in terms of lithological units. The lithological discontinuities (strata bounds and fault contacts) are located at the local extrema of the Hurst functions. Moreover, the regularity profiles are compared with the KTB estimated porosity logs, showing a significant relation between the local extrema of the Hurst functions and the fluid-filled fractures. The Hurst function may then constitute a tool to characterize underground heterogeneities.