A methodological approach to seismic hazard evaluation is proposed that allows the study of the influence of different modelling assumptions, relative to the spatial and temporal distribution of earthquakes, on the maximum values of expected intensities. In particular, we show that the estimated hazard at a fixed point is very sensitive to the assumed spatial distribution of epicentres and their estimators. As we will see, the usual approach, based on uniformly distributing the epicentres inside each seismogenic zone, is likely to be biased towards lower expected intensity values; this will be made more precise later. Recall that the term "bias" means that the expectation of the estimated quantity (taken as a random variable on the space of statistics) is different from the expectation of the quantity itself. In contrast, our approach, based on an estimator that takes into account the observed clustering of events, is essentially unbiased, as shown by a Monte Carlo simulation, and is built on a non-isotropic macroseismic attenuation model which is independently estimated for each zone.
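For reference, the notion of bias invoked above has the standard textbook form (this is general statistical notation, not notation taken from the paper itself):

```latex
\operatorname{Bias}(\hat{\theta}) \;=\; \mathbb{E}[\hat{\theta}\,] - \theta ,
\qquad
\hat{\theta}\ \text{unbiased} \iff \mathbb{E}[\hat{\theta}\,] = \theta .
```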
In estimating dispersion with the help of wavelet analysis, considerable emphasis has been put on the extraction of the group velocity using the modulus of the wavelet transform. In this paper we give an asymptotic expression of the full propagator in wavelet space that comprises the phase velocity as well. This operator establishes a relationship between the observed signals at two different stations during wave propagation in a dispersive and attenuating medium. Numerical and experimental examples are presented to show that the method accurately models seismic wave dispersion and attenuation.
Potential fields are classically represented on the sphere using spherical harmonics. However, this decomposition leads to numerical difficulties when data to be modelled are irregularly distributed or cover a regional zone. To overcome this drawback, we develop a new representation of the magnetic and the gravity fields based on wavelet frames. In this paper, we first describe how to build wavelet frames on the sphere. The chosen frames are based on the Poisson multipole wavelets, which are of special interest for geophysical modelling, since their scaling parameter is linked to the multipole depth (Holschneider et al.). The implementation of wavelet frames results from a discretization of the continuous wavelet transform in space and scale. We also build different frames using two kinds of spherical meshes and various scale sequences. We then validate the mathematical method through simple fits of scalar functions on the sphere, named 'scalar models'. Moreover, we propose magnetic and gravity models, referred to as 'vectorial models', taking into account geophysical constraints. We then discuss the representation of the Earth's magnetic and gravity fields from data regularly or irregularly distributed. Comparisons of the obtained wavelet models with the initial spherical harmonic models point out the advantages of wavelet modelling when the magnetic or gravity data used are sparsely distributed or cover only a very local zone.
In this paper, we propose a method of surface-wave characterization based on the deformation of the wavelet transform of the analysed signal. An estimate of the phase velocity, the group velocity and the attenuation coefficient is carried out using a model-based approach to determine the propagation operator in the wavelet domain, which depends nonlinearly on a set of unknown parameters. These parameters explicitly define the phase velocity, the group velocity and the attenuation. Under the assumption that the difference between waveforms observed at a couple of stations is solely due to the dispersion characteristics and the intrinsic attenuation of the medium, we then seek to find the set of unknown parameters of this model. Finding the model parameters turns out to be an optimization problem, which is solved through the minimization of an appropriately defined cost function. We show that, unlike time-frequency methods that exploit only the square modulus of the transform, we can achieve a complete characterization of surface waves in a dispersive and attenuating medium. Using both synthetic examples and experimental data, we also show that it is in principle possible to separate different modes in both the time domain and the frequency domain.
We investigate the influence of spatial heterogeneities on various aspects of brittle failure and seismicity in a model of a large strike-slip fault. The model dynamics is governed by realistic boundary conditions consisting of constant velocity motion of regions around the fault, static/kinetic friction laws, creep with depth-dependent coefficients, and 3-D elastic stress transfer. The dynamic rupture is approximated on a continuous time scale using a finite stress propagation velocity ("quasidynamic model"). The model produces a "brittle-ductile" transition at a depth of about 12.5 km, realistic hypocenter distributions, and other features of seismicity compatible with observations. Previous work suggested that the range of size scales in the distribution of strength-stress heterogeneities acts as a tuning parameter of the dynamics. Here we test this hypothesis by performing a systematic parameter-space study with different forms of heterogeneities. In particular, we analyze spatial heterogeneities that can be tuned by a single parameter in two distributions: (1) high stress drop barriers in near-vertical directions and (2) spatial heterogeneities with fractal properties and variable fractal dimension. The results indicate that the first form of heterogeneities provides an effective means of tuning the behavior while the second does not. In relatively homogeneous cases, the fault self-organizes to large-scale patches and big events are associated with inward failure of individual patches and sequential failures of different patches. The frequency-size event statistics in such cases are compatible with the characteristic earthquake distribution and large events are quasi-periodic in time. In strongly heterogeneous or near-critical cases, the rupture histories are highly discontinuous and consist of complex migration patterns of slip on the fault. In such cases, the frequency-size and temporal statistics follow approximately power-law relations.
We introduce a method of wavefield separation from multicomponent data sets based on the use of the continuous wavelet transform. Our method is a further generalization of the approach proposed by Morozov and Smithson, in that, by using the continuous wavelet transform, we can achieve a better separation of wave types by designing the filter in the time-frequency domain. Furthermore, using the instantaneous polarization attributes defined in the wavelet domain, we show how to construct filters tailored to separate different wave types (elliptically or linearly polarized), followed by an inverse wavelet transform to obtain the desired wave type in the time domain. Using synthetic and experimental data, we show how the present method can be used for wavefield separation.
Aftershock rates seem to follow a power-law decay, but the question of the aftershock frequency immediately after an earthquake remains open. We estimate an average aftershock decay rate within one day in southern California by stacking in time different sequences triggered by main shocks ranging in magnitude from 2.5 to 4.5. Then we estimate the time delay before the onset of the power-law aftershock decay rate. For the last 20 years, we observe that this time delay suddenly increases after large earthquakes, and slowly decreases at a constant rate during periods of low seismicity. In a band-limited power-law model such variations can be explained by different patterns of stress distribution at different stages of the seismic cycle. We conclude that, on regional length scales, the brittle upper crust exhibits a collective behavior reflecting to some extent the proximity of a threshold of fracturing.
This paper is devoted to the digital processing of multicomponent seismograms using wavelet analysis. The goal of this processing is to identify Rayleigh surface elastic waves and determine their properties. A new method for calculating the ellipticity parameters of a wave in the form of a time-frequency spectrum is proposed, which offers wide possibilities for filtering seismic signals in order to suppress or extract the Rayleigh components. A model of dispersion and dissipation of elliptic waves written in terms of wavelet spectra of complex (two-component) signals is also proposed. The model is used to formulate a nonlinear minimization problem that allows for a high-accuracy calculation of the group and phase velocities and the attenuation factor for a propagating elliptic Rayleigh wave. All methods considered in the paper are illustrated with the use of test signals.
We show that realistic aftershock sequences with space-time characteristics compatible with observations are generated by a model consisting of brittle fault segments separated by creeping zones. The dynamics of the brittle regions is governed by static/kinetic friction, 3D elastic stress transfer and small creep deformation. The creeping parts are characterized by high ongoing creep velocities. These regions store stress during earthquake failures and then release it in the interseismic periods. The resulting postseismic deformation leads to aftershock sequences following the modified Omori law. The ratio of creep coefficients in the brittle and creeping sections determines the duration of the postseismic transients and the exponent p of the modified Omori law.
In this paper, we discuss the origin of superswell volcanism on the basis of representation and analysis of recent gravity and magnetic satellite data with wavelets in spherical geometry. We computed a refined gravity field in the south central Pacific based on the GRACE satellite GGM02S global gravity field and the KMS02 altimetric grid, and a magnetic anomaly field based on CHAMP data. The magnetic anomalies are marked by the magnetic lineation of the seafloor spreading and by a strong anomaly in the Tuamotu region, which we interpret as evidence for crustal thickening. We interpret our gravity field through a continuous wavelet analysis that allows us to get a first idea of the internal density distribution. We also compute the continuous wavelet analysis of the bathymetric contribution to discriminate between deep and superficial sources. According to the gravity signature of the different chains as revealed by our analysis, various processes are at the origin of the volcanism in French Polynesia. As evidence, we show a large-scale anomaly over the Society Islands that we interpret as the gravity signature of a deeply anchored mantle plume. The gravity signature of the Cook-Austral chain indicates a complex origin which may involve deep processes. Finally, we discuss the particular location of the Marquesas chain as suggesting that the origin of the volcanism may interfere with secondary convection rolls or may be controlled by lithospheric weakness due to the regional stress field, or else related to the presence of the nearby Tuamotu plateau.
We introduce a method for computing instantaneous-polarization attributes from multicomponent signals. This is an improvement on the standard covariance method (SCM) because it does not depend on the window size used to compute the standard covariance matrix. We overcome the window-size problem by deriving an approximate analytical formula for the cross-energy matrix in which we automatically and adaptively determine the time window. The proposed method applies polarization analysis to multicomponent seismic data for waveform separation and filtering.
The parameters of the nutations are now known with a good accuracy, and the theory accounts for most of their values. Dissipative friction at the core-mantle boundary (CMB) and at the inner core boundary is an important ingredient of the theory. Up to now, viscous coupling at a smooth interface and electromagnetic coupling have been considered. In some cases they appear hardly strong enough to account for the observations. We advocate here that the CMB has a small-scale roughness and estimate the dissipation resulting from the interaction of the fluid core motion with this topography. We conclude that it might be significant.
Characterization of polarization attributes of seismic waves using continuous wavelet transforms
(2006)
Complex-trace analysis is the method of choice for analyzing polarized data. Because particle motion can be represented by instantaneous attributes that show distinct features for waves of different polarization characteristics, it can be used to separate and characterize these waves. Traditional methods of complex-trace analysis only give the instantaneous attributes as a function of time or frequency. However, for transient wave types or seismic events that overlap in time, an estimate of the polarization parameters requires analysis of the time-frequency dependence of these attributes. We propose a method to map instantaneous polarization attributes of seismic signals in the wavelet domain and explicitly relate these attributes with the wavelet-transform coefficients of the analyzed signal. We compare our method with traditional complex-trace analysis using numerical examples. An advantage of our method is the possibility of performing the complete wave-mode separation/filtering process in the wavelet domain and its ability to provide the frequency dependence of ellipticity, which contains important information on the subsurface structure. Furthermore, using 2-C synthetic and real seismic shot gathers, we show how to use the method to separate different wave types and identify zones of interfering wave modes.
This paper is concerned with localization properties of coherent states. Instead of classical uncertainty relations we consider "generalized" localization quantities. This is done by introducing measures on the reproducing kernel. In this context we may prove the existence of optimally localized states. Moreover, we provide a numerical scheme for deriving them.
Complex systems range from "hard", physical ones, such as climate physics or turbulence in fluids or plasmas, to so-called "soft" ones, as found in biology, soft-matter physics, sociology or economics. Building an understanding of such a system involves a description in terms of statistics and, ultimately, mathematical equations. Modern data analysis provides a large set of tools for analysing complexity at different levels of description. In this course, statistical methods with an emphasis on dynamical systems are discussed and practised. On the methodological side, linear and nonlinear approaches are covered, including the standard tools of descriptive and inferential statistics, wavelet analysis, nonparametric regression, and the estimation of nonlinear measures such as fractal dimensions, entropies and complexity measures. On the modelling side, deterministic and stochastic systems, chaos, scaling and the emergence of complexity through interaction are discussed, for discrete as well as spatially extended systems. The two approaches are brought together through system analysis of suitable examples in each case.
We construct a family of admissible analysis reconstruction pairs of wavelet families on the sphere. The construction is an extension of the isotropic Poisson wavelets. Similar to those, the directional wavelets allow a finite expansion in terms of off-center multipoles. Unlike the isotropic case, the directional wavelets are not a tight frame. However, at small scales, they almost behave like a tight frame. We give an explicit formula for the pseudodifferential operator given by the combination analysis-synthesis with respect to these wavelets. The Euclidean limit is shown to exist and an explicit formula is given. This allows us to quantify the asymptotic angular resolution of the wavelets.
Borehole logs provide geological information about the rocks crossed by the wells. Several properties of rocks can be interpreted in terms of lithology, type and quantity of the fluid filling the pores and fractures. Here, the logs are assumed to be nonhomogeneous Brownian motions (nhBms), which are generalized fractional Brownian motions (fBms) indexed by depth-dependent Hurst parameters H(z). Three techniques, the local wavelet approach (LWA), the average-local wavelet approach (ALWA), and the Peltier algorithm (PA), are suggested to estimate the Hurst functions (or the regularity profiles) from the logs. First, two synthetic sonic logs with different parameters, shaped by the successive random additions (SRA) algorithm, are used to demonstrate the potential of the proposed methods. The obtained Hurst functions are close to the theoretical Hurst functions. Besides, the transitions between the modeled layers are marked by discontinuities in the Hurst values. It is also shown that PA leads to the best Hurst value estimations. Second, we investigate the multifractional property of sonic log data recorded at two scientific deep boreholes: the pilot hole VB and the ultra-deep main hole HB, drilled for the German Continental Deep Drilling Program (KTB). All the regularity profiles independently obtained for the logs provide a clear correlation with lithology, and from each regularity profile, we derive a similar segmentation in terms of lithological units. The lithological discontinuities (strata bounds and fault contacts) are located at the local extrema of the Hurst functions. Moreover, the regularity profiles are compared with the KTB estimated porosity logs, showing a significant relation between the local extrema of the Hurst functions and the fluid-filled fractures. The Hurst function may then constitute a tool to characterize underground heterogeneities.
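To make the "local wavelet approach" concrete, the following is a minimal Python sketch under simplifying assumptions: a uniformly sampled log, a Mexican-hat wavelet, L1-normalised CWT moduli that scale as |W(a, z)| ~ a^H for fractional Brownian motion, and an arbitrary depth window. The wavelet, scales and window size are illustrative choices, not those used in the studies above.

```python
import numpy as np

def local_hurst(x, scales, half_window=64):
    """Sketch of a 'local wavelet approach': estimate a depth-dependent Hurst
    exponent H(z) from the scaling of L1-normalised CWT moduli,
    |W(a, z)| ~ a^H for fractional Brownian motion."""
    x = np.asarray(x, float)
    n = len(x)
    moduli = []
    for a in scales:
        u = np.arange(-4 * a, 4 * a + 1) / a
        psi = (1.0 - u**2) * np.exp(-u**2 / 2.0)        # Mexican-hat wavelet
        psi -= psi.mean()                               # enforce zero mean
        w = np.convolve(x, psi[::-1], mode="same") / a  # L1-normalised CWT slice
        moduli.append(np.abs(w))
    moduli = np.array(moduli)                           # shape (n_scales, n)

    H = np.full(n, np.nan)
    log_a = np.log(np.asarray(scales, float))
    for z in range(half_window, n - half_window):
        # average modulus over a small depth window, then fit log|W| versus log a
        m = moduli[:, z - half_window:z + half_window].mean(axis=1)
        slope, _ = np.polyfit(log_a, np.log(m), 1)
        H[z] = slope
    return H

# usage (illustrative): H = local_hurst(sonic_log, scales=[4, 8, 16, 32, 64])
```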
We present a statistical analysis of focal mechanism orientations for nine California fault zones with the goal of quantifying variations of fault zone heterogeneity at seismogenic depths. The focal mechanism data are generated from first motion polarities for earthquakes in the time period 1983-2004, magnitude range 0-5, and depth range 0-15 km. Only mechanisms with good quality solutions are used. We define fault zones using 20 km wide rectangles and use summations of normalized potency tensors to describe the distribution of double-couple orientations for each fault zone. Focal mechanism heterogeneity is quantified using two measures computed from the tensors that relate to the scatter in orientations and rotational asymmetry or skewness of the distribution. We illustrate the use of these quantities by showing relative differences in the focal mechanism heterogeneity characteristics for different fault zones. These differences are shown to relate to properties of the fault zone surface traces such that increased scatter correlates with fault trace complexity and rotational asymmetry correlates with the dominant fault trace azimuth. These correlations indicate a link between the long-term evolution of a fault zone over many earthquake cycles and its seismic behaviour over a 20 yr time period. Analysis of the partitioning of San Jacinto fault zone focal mechanisms into different faulting styles further indicates that heterogeneity is dominantly controlled by structural properties of the fault zone, rather than time or magnitude related properties of the seismicity.
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence time distribution of large earthquakes based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors accounting for grouped magnitudes and random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted in an early state of seismicity evolution, even though only a small number of large earthquakes has occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and can therefore not be neglected. We show how to correct for the bias caused by magnitude errors. For the region of Parkfield we find that the aperiodicity, or the coefficient of variation, is clearly higher than in studies which are solely based on the large earthquakes.
We discuss to what extent a given earthquake catalog and the assumption of a doubly truncated Gutenberg-Richter distribution for the earthquake magnitudes allow for the calculation of confidence intervals for the maximum possible magnitude M. We show that, without further assumptions such as the existence of an upper bound of M, only very limited information may be obtained. In a frequentist formulation, for each confidence level alpha the confidence interval diverges with finite probability. In a Bayesian formulation, the posterior distribution of the upper magnitude is not normalizable. We conclude that the common approach to derive confidence intervals from the variance of a point estimator fails. Technically, this problem can be overcome by introducing an upper bound M̃ for the maximum magnitude. Then the Bayesian posterior distribution can be normalized, and its variance decreases with the number of observed events. However, because the posterior depends significantly on the choice of the unknown value of M̃, the resulting confidence intervals are essentially meaningless. The use of an informative prior distribution accounting for pre-knowledge of M is also of little use, because the prior is only modified in the case of the occurrence of an extreme event. Our results suggest that the maximum possible magnitude M should rather be replaced by M(T), the maximum expected magnitude in a given time interval T, for which the calculation of exact confidence intervals becomes straightforward. From a physical point of view, numerical models of the earthquake process adjusted to specific fault regions may be a powerful alternative to overcome the shortcomings of purely statistical inference.
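To see where the non-normalizability comes from, consider a common parametrisation of the doubly truncated Gutenberg-Richter density (with beta = b ln 10 and lower completeness magnitude m_0; the paper's own notation may differ):

```latex
f(m \mid \beta, M) \;=\; \frac{\beta\, e^{-\beta (m - m_0)}}{1 - e^{-\beta (M - m_0)}},
\quad m_0 \le m \le M ,
\qquad
L(M) \;\propto\; \bigl[\,1 - e^{-\beta (M - m_0)}\bigr]^{-n},
\quad M \ge m_{\max} .
```

Because L(M) approaches a positive constant as M grows, a flat prior on M yields a posterior whose normalizing integral diverges, which is the situation described above; fixing a finite upper bound M̃ removes the divergence but makes the answer depend on M̃.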
We develop a multigrid, multiple time stepping scheme to reduce computational efforts for calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scale, grid-coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first with a small contribution is computed on a coarse level, and the rest for strong interactions is on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is even enhanced by combining the multigrid with multiple time stepping. Computational efficiency is enhanced by a factor of 10 while retaining a reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of solution and computational efficiency depend on a given cut-off radius that splits multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
Change points in time series are perceived as isolated singularities where two regular trends of a given signal do not match. The detection of such transitions is of fundamental interest for the understanding of the system's internal dynamics or external forcings. In practice observational noise makes it difficult to detect such change points in time series. In this work we elaborate on a Bayesian algorithm to estimate the location of the singularities and to quantify their credibility. We validate the performance and sensitivity of our inference method by estimating change points of synthetic data sets. As an application we use our algorithm to analyze the annual flow volume of the Nile River at Aswan from 1871 to 1970, where we confirm a well-established significant transition point within the time series.
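As an illustration of the inference step, here is a minimal Python sketch for the posterior over the location of a single change point in the mean of a Gaussian signal, with flat priors on the two means and the noise variance marginalised; the cited algorithm is more general (it deals with regular trends rather than constant levels and also quantifies credibility).

```python
import numpy as np

def changepoint_posterior(y):
    """Posterior over the location of a single change point in the mean of a
    Gaussian signal (flat priors on the two means, noise variance marginalised).
    Minimal sketch; boundary points are excluded so both segments stay non-trivial."""
    y = np.asarray(y, float)
    n = len(y)
    logp = np.full(n, -np.inf)
    for k in range(2, n - 2):          # change between samples k-1 and k
        left, right = y[:k], y[k:]
        rss = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        # marginal likelihood up to constants: rss^{-(n-2)/2} / sqrt(n1 * n2)
        logp[k] = -0.5 * (n - 2) * np.log(rss) - 0.5 * np.log(k * (n - k))
    p = np.exp(logp - logp.max())
    return p / p.sum()

# usage (illustrative): p = changepoint_posterior(nile_annual_flow); k_hat = p.argmax()
```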
In this study we propose a Bayesian approach to the estimation of the Hurst exponent in terms of linear mixed models. Our method is applicable even for unevenly sampled signals and signals with gaps. We test our method by using artificial fractional Brownian motion of different length and compare it with the detrended fluctuation analysis technique. The estimation of the Hurst exponent of a Rosenblatt process is shown as an example of an H-self-similar process with non-Gaussian dimensional distribution. Additionally, we perform an analysis with real data, the Dow-Jones Industrial Average closing values, and analyze the temporal variation of its Hurst exponent.
Borehole logs provide in situ information about the fluctuations of petrophysical properties with depth and thus allow the characterization of the crustal heterogeneities. A detailed investigation of these measurements may lead to the extraction of features of the geological media. In this study, we suggest a regularity analysis based on the continuous wavelet transform to examine sonic log data. The description of the local behavior of the logs at each depth is carried out using the local Hurst exponent estimated by two approaches: the local wavelet approach and the average-local wavelet approach. Firstly, a synthetic log, generated using the random midpoint displacement algorithm, is processed by the regularity analysis. The obtained Hurst curves allowed the discernment of the different layers composing the simulated geological model. Next, this analysis is extended to real sonic log data recorded at the Kontinentales Tiefbohrprogramm (KTB) pilot borehole (Continental Deep Drilling Program, Germany). The results show a significant correlation between the estimated Hurst exponents and the lithological discontinuities crossed by the well. Hence, the Hurst exponent can be used as a tool to characterize underground heterogeneities.
The aim of this paper is to estimate the Hurst parameter of Fractional Gaussian Noise (FGN) using Bayesian inference. We propose an estimation technique that takes into account the full correlation structure of this process. Instead of using the integrated time series and then applying an estimator for its Hurst exponent, we propose to use the noise signal directly. As an application we analyze the time series of the Nile River, where we find a posterior distribution which is compatible with previous findings. In addition, our technique provides natural error bars for the Hurst exponent.
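A minimal Python sketch of the underlying idea, i.e. using the full correlation structure of the noise directly: assume zero-mean fractional Gaussian noise, a flat prior on H, and marginalise the variance analytically; the grid evaluation below stands in for whatever sampler or mixed-model machinery the papers actually use.

```python
import numpy as np

def fgn_autocov(lags, H):
    """Autocovariance of unit-variance fractional Gaussian noise (standard formula)."""
    k = np.abs(lags).astype(float)
    return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

def log_posterior_H(x, H_grid):
    """Unnormalised log-posterior over H for a zero-mean FGN sample x, using the
    full covariance matrix, a flat prior on H and the variance marginalised
    analytically (up to constants).  Minimal sketch, feasible for short series."""
    x = np.asarray(x, float)
    n = len(x)
    lags = np.arange(n)
    dist = np.abs(np.subtract.outer(lags, lags))        # |i - j| matrix
    logp = np.empty(len(H_grid))
    for i, H in enumerate(H_grid):
        C = fgn_autocov(lags, H)[dist]                  # Toeplitz covariance matrix
        _, logdet = np.linalg.slogdet(C)
        quad = x @ np.linalg.solve(C, x)
        logp[i] = -0.5 * logdet - 0.5 * n * np.log(quad)
    return logp - logp.max()

# usage (illustrative): H_grid = np.linspace(0.05, 0.95, 91)
#                       posterior = np.exp(log_posterior_H(noise, H_grid))
```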
Standing stocks are typically easier to measure than process rates such as production. Hence, stocks are often used as indicators of ecosystem functions although the latter are generally more strongly related to rates than to stocks. The regulation of stocks and rates and thus their variability over time may differ, as stocks constitute the net result of production and losses. Based on long-term high frequency measurements in a large, deep lake we explore the variability patterns in primary and bacterial production and relate them to those of the corresponding standing stocks, i.e. chlorophyll concentration, phytoplankton and bacterial biomass. We employ different methods (coefficient of variation, spline fitting and spectral analysis) which complement each other for assessing the variability present in the plankton data at different temporal scales. In phytoplankton, we found that the overall variability of primary production is dominated by fluctuations at low frequencies, such as the annual, whereas in stocks, and chlorophyll in particular, higher frequencies contribute substantially to the overall variance. This suggests that using standing stocks instead of rate measures leads to an under- or overestimation of food shortage for consumers during distinct periods of the year. The range of annual variation in bacterial production is 8 times greater than that of biomass, showing that the variability of bacterial activity (e.g. oxygen consumption, remineralisation) would be underestimated if biomass is used. The P/B ratios were variable and although clear trends are present in both bacteria and phytoplankton, no systematic relationship between stock and rate measures was found for the two groups. Hence, standing stock and process rate measures exhibit different variability patterns and care is needed when interpreting the mechanisms and implications of the variability encountered.
Wavelet modelling of the gravity field by domain decomposition methods: an example over Japan
(2011)
With the advent of satellite gravity, large gravity data sets of unprecedented quality at low and medium resolution become available. For local, high resolution field modelling, they need to be combined with the surface gravity data. Such models are then used for various applications, from the study of the Earth's interior to the determination of oceanic currents. Here we show how to realize such a combination in a flexible way using spherical wavelets and applying a domain decomposition approach. This iterative method, based on the Schwarz algorithms, allows us to split a large problem into smaller ones, and avoids the calculation of the entire normal system, which may be huge if high resolution is sought over wide areas. A subdomain is defined as the harmonic space spanned by a subset of the wavelet family. Based on the localization properties of the wavelets in space and frequency, we define hierarchical subdomains of wavelets at different scales. On each scale, blocks of subdomains are defined by using a tailored spatial splitting of the area. The data weighting and regularization are iteratively adjusted for the subdomains, which allows us to handle heterogeneity in the data quality or the gravity variations. Different levels of approximation of the subdomain normals are also introduced, corresponding to building local averages of the data at different resolution levels.
We first provide the theoretical background on domain decomposition methods. Then, we validate the method with synthetic data, considering two kinds of noise: white noise and coloured noise. We then apply the method to data over Japan, where we combine a satellite-based geopotential model, EIGEN-GL04S, and a local gravity model from a combination of land and marine gravity data and an altimetry-derived marine gravity model. A hybrid spherical harmonics/wavelet model of the geoid is obtained at about 15 km resolution and a corrector grid for the surface model is derived.
We present an alarm-based earthquake forecast model that uses the early aftershock statistics (EAST). This model is based on the hypothesis that the time delay before the onset of the power-law aftershock decay rate decreases as the level of stress and the seismogenic potential increase. Here, we estimate this time delay from ⟨t_g⟩, the time constant of the Omori-Utsu law. To isolate space-time regions with a relatively high level of stress, the single local variable of our forecast model is the E_a value, the ratio between the long-term and short-term estimations of ⟨t_g⟩. When and where the E_a value exceeds a given threshold (i.e., the c value is abnormally small), an alarm is issued, and an earthquake is expected to occur during the next time step. Retrospective tests show that the EAST model has better predictive power than a stationary reference model based on smoothed extrapolation of past seismicity. The official prospective test for California started on 1 July 2009 in the testing center of the Collaboratory for the Study of Earthquake Predictability (CSEP). During the first nine months, 44 M >= 4 earthquakes occurred in the testing area. For this time period, the EAST model has better predictive power than the reference model at a 1% level of significance. Because the EAST model also has better predictive power than several time-varying clustering models tested in CSEP at a 1% level of significance, we suggest that our successful prospective results are not due only to the space-time clustering of aftershocks.
We propose a conversion method from alarm-based to rate-based earthquake forecast models. A differential probability gain g^alarm_ref is the absolute value of the local slope of the Molchan trajectory that evaluates the performance of the alarm-based model with respect to the chosen reference model. We consider that this differential probability gain is constant over time. Its value at each point of the testing region depends only on the alarm function value. The rate-based model is the event rate of the reference model at this point multiplied by the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows similar performance to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, seismic hazard maps with short- and medium-term forecasts.
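A minimal sketch of the conversion step just described; `thresholds` and `gains` stand in for a piecewise-constant table of differential probability gains read off the Molchan trajectory, so the interface is illustrative rather than the authors' implementation.

```python
import numpy as np

def rate_forecast(ref_rates, alarm_values, thresholds, gains):
    """Multiply each cell's reference event rate by the differential probability
    gain associated with its alarm-function value.  'thresholds' (sorted) and
    'gains' form a hypothetical lookup table; illustrative interface only."""
    idx = np.searchsorted(thresholds, alarm_values, side="right") - 1
    idx = np.clip(idx, 0, len(gains) - 1)
    return np.asarray(ref_rates, float) * np.asarray(gains, float)[idx]
```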
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility to improve inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework in the case of the 2004 M6.0 Parkfield earthquake. We perform a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is preferably related to a physically reasonable model constrained by the stress condition (namely that postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjunct. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event, respectively. It indicates that the major postseismic deformation can be generally explained by a stress relaxation process for the Parkfield case. This result also indicates that the data to constrain the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process at the two sides of the fault, which can be explained by a material contrast ratio across the fault of about 1.15 in seismic velocity.
We consider a model based on the fractional Brownian motion under the influence of noise. We implement the Bayesian approach to estimate the Hurst exponent of the model. The robustness of the method to the noise intensity is tested using artificial data from fractional Brownian motion. We show that the estimation of the parameters is achieved when noise is considered explicitly in the model. Moreover, we identify the corresponding noise-amplitude level that allows the correct estimation of the Hurst exponents to be obtained in various cases.
Multivariate analyses of fixation durations in reading with linear mixed and additive mixed models
(2012)
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches we choose two different data region shapes that resemble that scenario. First, we investigate the errors in elliptical regions and secondly we choose a region obtained from two overlapping circular spherical caps. We conduct a Monte-Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to the ones obtained for circular spherical caps: the maximum error at the boundary decreases towards the centre of the region. A new result emerges as errors at the boundary vary with azimuth, being largest in the major axis direction and minimal in the minor axis direction. Inside the region there is an error decay towards a minimum at the centre at a rate similar to the one in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre. The minimum error occurs at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
Analytical and numerical analysis of imaging mechanism of dynamic scanning electron microscopy
(2012)
The direct observation of small oscillating structures with the help of a scanning electron beam is a new approach to study the vibrational dynamics of cantilevers and microelectromechanical systems. In the scanning electron microscope, the conventional signal of secondary electrons (SE, dc part) is separated from the signal response of the SE detector, which is correlated to the respective excitation frequency for vibration by means of a lock-in amplifier. The dynamic response is separated either into images of amplitude and phase shift or into real and imaginary parts. Spatial resolution is limited to the diameter of the electron beam. The sensitivity limit to vibrational motion is estimated to be sub-nanometer for high integration times. Due to complex imaging mechanisms, a theoretical model was developed for the interpretation of the obtained measurements, relating cantilever shapes to interaction processes consisting of incident electron beam, electron-lever interaction, emitted electrons and detector response. Conclusions drawn from this new model are compared with numerical results based on the Euler-Bernoulli equation.
Different GRACE data analysis centers provide temporal variations of the Earth's gravity field as monthly, 10-daily or weekly solutions. These temporal mean fields cannot model the variations occurring during the respective time span. The aim of our approach is to extract as much temporal information as possible out of the given GRACE data. Therefore the temporal resolution shall be increased with the goal to derive daily snapshots. Yet, such an increase in temporal resolution is accompanied by a loss of redundancy and therefore in a reduced accuracy if the daily solutions are calculated individually. The approach presented here therefore introduces spatial and temporal correlations of the expected gravity field signal derived from geophysical models in addition to the daily observations, thus effectively constraining the spatial and temporal evolution of the GRACE solution. The GRACE data processing is then performed within the framework of a Kalman filter and smoother estimation procedure.
The approach is at first investigated in a closed-loop simulation scenario and then applied to the original GRACE observations (level-1B data) to calculate daily solutions as part of the gravity field model ITG-Grace2010. Finally, the daily models are compared to vertical GPS station displacements and ocean bottom pressure observations.
From these comparisons it can be concluded that, particularly in higher latitudes, the daily solutions contain high-frequency temporal gravity field information and represent an improvement over existing geophysical models.
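The estimation principle can be sketched with a generic random-walk Kalman filter plus Rauch-Tung-Striebel smoother on a toy state vector; in the actual processing the process noise comes from geophysical model covariances and the observation equations from the level-1B data, so everything below is a simplified stand-in.

```python
import numpy as np

def kalman_filter_smoother(obs, H, R, Q, x0, P0):
    """Generic Kalman filter with a random-walk process model x_t = x_{t-1} + w_t,
    w_t ~ N(0, Q), followed by a Rauch-Tung-Striebel smoother.  Toy stand-in for
    the daily estimation described above."""
    n, m = len(obs), len(x0)
    xp, Pp, xf, Pf = [], [], [], []
    x, P = x0, P0
    for y in obs:                                   # forward filter
        x_pred, P_pred = x, P + Q                   # random-walk prediction
        xp.append(x_pred); Pp.append(P_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x_pred + K @ (y - H @ x_pred)
        P = (np.eye(m) - K @ H) @ P_pred
        xf.append(x); Pf.append(P)
    xs = list(xf)                                   # backward smoother
    for t in range(n - 2, -1, -1):
        C = Pf[t] @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + C @ (xs[t + 1] - xp[t + 1])
    return np.array(xs)
```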
In the eighties, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of the large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, considerably more precise than the altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent from gravity. Our analysis confirms the existence of gravity undulations at large scale in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific Ocean, where they are particularly clear, and associated with a conjugate direction at 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate and had never been so clearly observed previously. This is due to the great precision of GRACE data at those wavelengths. Given the large scale of these undulations, they are most likely related to mantle processes. Taking into account observations and models from other geophysical information, such as seismological tomography, convection and geochemical models, and electrical conductivity in the mantle, we conceive that all these inputs indicate a directional fabric of the mantle flows at depth, reflecting how the history of subduction influences the organization of lower mantle upwellings.
Bayesian selection of Markov Models for symbol sequences: application to microsaccadic eye movements
(2012)
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
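A minimal Python sketch of the model-selection idea: the integrated likelihood of a symbol sequence under an order-m Markov model with symmetric Dirichlet priors on the transition probabilities. The prior choice and the handling of the initial symbols are assumptions of this sketch, not necessarily those of the paper.

```python
import numpy as np
from math import lgamma

def log_evidence_markov(seq, order, alphabet, alpha=1.0):
    """Integrated likelihood of a symbol sequence under a Markov model of the
    given order, with symmetric Dirichlet(alpha) priors on the transition
    probabilities; the first 'order' symbols are treated as fixed."""
    K = len(alphabet)
    index = {s: j for j, s in enumerate(alphabet)}
    counts = {}
    for t in range(order, len(seq)):
        ctx = tuple(seq[t - order:t])               # conditioning context
        counts.setdefault(ctx, np.zeros(K))
        counts[ctx][index[seq[t]]] += 1
    logev = 0.0
    for c in counts.values():                       # Dirichlet-multinomial evidence
        logev += lgamma(K * alpha) - lgamma(K * alpha + c.sum())
        logev += sum(lgamma(alpha + cj) - lgamma(alpha) for cj in c)
    return logev

# usage (illustrative): compare log_evidence_markov(directions, m, ['L', 'R', 'U', 'D'])
# for m = 0, 1, 2, ... and pick the order with the largest integrated likelihood
```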
In this study we re-evaluate the estimation of the self-similarity exponent of fixational eye movements using Bayesian theory. Our analysis is based on a subsampling decomposition, which permits an analysis of the signal up to some scale factor. We demonstrate that our approach can be applied to simulated data from mathematical models of fixational eye movements to distinguish the models' properties reliably.
In order to examine variations in aftershock decay rate, we propose a Bayesian framework to estimate the {K, c, p}-values of the modified Omori law (MOL), lambda(t) = K (c + t)^(-p). The Bayesian setting allows us not only to produce a point estimator of these three parameters but also to assess their uncertainties and posterior dependencies with respect to the observed aftershock sequences. Using a new parametrization of the MOL, we identify the trade-off between the c and p-value estimates and discuss its dependence on the number of aftershocks. Then, we analyze the influence of the catalog completeness interval [t_start, t_stop] on the various estimates. To test this Bayesian approach on natural aftershock sequences, we use two independent and non-overlapping aftershock catalogs of the same earthquakes in Japan. Taking into account the posterior uncertainties, we show that both the handpicked (short times) and the instrumental (long times) catalogs predict the same ranges of parameter values. We therefore conclude that the same MOL may be valid over short and long times.
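For concreteness, the log-likelihood on which such a Bayesian analysis rests can be sketched as follows, treating the aftershock times as an inhomogeneous Poisson process with MOL rate over the completeness interval; priors on (K, c, p) and the posterior sampling are left out here.

```python
import numpy as np

def log_likelihood_mol(times, K, c, p, t_start, t_stop):
    """Log-likelihood of aftershock occurrence times (relative to the main shock,
    restricted to [t_start, t_stop]) under the modified Omori law
    lambda(t) = K * (c + t)**(-p), treated as an inhomogeneous Poisson process."""
    t = np.asarray(times, float)
    log_rate = np.log(K) - p * np.log(c + t)
    if np.isclose(p, 1.0):                          # integral of the rate over the window
        integral = K * (np.log(c + t_stop) - np.log(c + t_start))
    else:
        integral = K * ((c + t_stop)**(1 - p) - (c + t_start)**(1 - p)) / (1 - p)
    return log_rate.sum() - integral

# usage (illustrative): evaluate on a parameter grid or inside an MCMC sampler, add
# log-priors on (K, c, p), and normalise to obtain the posterior discussed above
```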
The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis make it possible to obtain maximum quantitative information related to the short-term dynamics of the geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent beta, a typical characteristic of self-affine time series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These new findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. These results show that, by using magnetogram regularity to reflect the magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
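For orientation, a global version of such a scaling-exponent estimate is simply a straight-line fit to the log-log periodogram, as in the Python sketch below; the sampling interval and the plain FFT periodogram are illustrative assumptions, whereas the study tracks local exponents with wavelet analysis.

```python
import numpy as np

def spectral_exponent(x, dt=60.0):
    """Estimate the exponent beta of a power-law spectrum S(f) ~ f**(-beta) by a
    straight-line fit to the log-log periodogram.  'dt' (sampling interval in
    seconds) is an illustrative assumption, e.g. one-minute observatory data."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]       # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2         # periodogram
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope
```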