Institut für Mathematik: 30 publications from 2012 (28 articles, 1 doctoral thesis, 1 other), all in English.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. We therefore explore the possibility of improving inversions through mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the 2004 M6.0 Parkfield earthquake. We carry out a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result comes from a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can generally be explained by a stress relaxation process in the Parkfield case. This result also indicates that the data constraining the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of approximately 1.15 in seismic velocity.
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
We consider quasicomplexes of pseudodifferential operators on a smooth compact manifold without boundary. To each quasicomplex we associate a complex of symbols. The quasicomplex is elliptic if this symbol complex is exact away from the zero section. We prove that elliptic quasicomplexes are Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes and prove a generalisation of the Atiyah-Singer index theorem.
The study of the semigroups OPn, of all orientation-preserving transformations on an n-element chain, and ORn, of all orientation-preserving or orientation-reversing transformations on an n-element chain, began in [17] and [5]. To bring more insight into the subsemigroup structure of OPn and ORn, we characterize their maximal subsemigroups.
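The defining property of these semigroups is easy to test computationally. As a minimal sketch (the function names are illustrative, not from the paper): a transformation on the chain {0, ..., n-1} is orientation-preserving exactly when its image sequence is cyclically non-decreasing, i.e. has at most one "descent", and orientation-reversing when the reversed image sequence has that property.

```python
def is_orientation_preserving(images):
    # images[i] is the image of chain element i; the map is
    # orientation-preserving iff the cyclic sequence of images
    # has at most one position where it strictly decreases.
    n = len(images)
    descents = sum(1 for i in range(n) if images[i] > images[(i + 1) % n])
    return descents <= 1

def is_orientation_reversing(images):
    # Reversing the reading order turns orientation-reversing
    # maps into orientation-preserving ones.
    return is_orientation_preserving(images[::-1])

# (2, 3, 0, 1) on the 4-chain: cyclic images 2,3,0,1 have one descent (3 > 0).
print(is_orientation_preserving([2, 3, 0, 1]))  # True
print(is_orientation_preserving([1, 0, 2, 0]))  # False: two descents
```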
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low-dimensional signal embedded in high dimensions. This paper introduces a multicollinear model, based on random matrix theory results, that shows potential for characterising a gene cluster's correlation matrix. The model projects a one-dimensional signal into many dimensions and builds on the spiked covariance model, but characterises the behaviour of the corresponding correlation matrix instead. The eigenspectrum of the correlation matrix is examined empirically by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a procedure for estimating the dimension of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a 'low' correlation between a pair of genes may simply result from the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
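The basic phenomenon is easy to reproduce. The following sketch embeds a one-dimensional signal in many noisy "gene" dimensions and inspects the eigenspectrum of the resulting correlation matrix; the sample sizes and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes, sigma = 500, 50, 0.5

# One latent 1-D signal replicated across all genes, plus iid noise.
signal = rng.standard_normal(n_samples)
data = np.outer(signal, np.ones(n_genes)) + sigma * rng.standard_normal((n_samples, n_genes))

corr = np.corrcoef(data, rowvar=False)          # gene-by-gene correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# One eigenvalue dominates, reflecting the single embedded dimension,
# while the remaining eigenvalues form a noise bulk near or below 1.
print(eigvals[:3])
```

With this noise level each pairwise correlation is only about 0.8, yet the leading eigenvalue collects almost the whole trace, which is the kind of collective evidence the abstract argues for.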
A partial transformation alpha on an n-element chain X-n is called order-preserving if x <= y implies x alpha <= y alpha for all x, y in the domain of alpha and it is called extensive if x <= x alpha for all x in the domain of alpha. The set of all partial order-preserving extensive transformations on X-n forms a semiband POEn. We determine the maximal subsemigroups as well as the maximal subsemibands of POEn.
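The two defining conditions translate directly into code. A minimal sketch (representing a partial transformation as a dict from domain elements to images; the helper names are illustrative):

```python
def is_order_preserving(alpha):
    # alpha: dict {x: x_alpha} on a chain; order-preserving means
    # x <= y implies x_alpha <= y_alpha for x, y in the domain.
    dom = sorted(alpha)
    return all(alpha[x] <= alpha[y] for x, y in zip(dom, dom[1:]))

def is_extensive(alpha):
    # Extensive means x <= x_alpha for every x in the domain.
    return all(x <= alpha[x] for x in alpha)

a = {1: 2, 3: 3, 4: 5}   # partial transformation on the 5-chain
print(is_order_preserving(a) and is_extensive(a))  # True: a lies in POE5
```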
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches, we choose two data region shapes that resemble that scenario: first, elliptical regions, and second, a region obtained from two overlapping circular spherical caps. We conduct a Monte Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to those obtained for circular spherical caps: the error is largest at the boundary and decreases towards the centre of the region. A new result emerges in that errors at the boundary vary with azimuth, being largest in the direction of the major axis and smallest in the direction of the minor axis. Inside the region the error decays towards a minimum at the centre, at a rate similar to that in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre, with the minimum error at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
The paper presents a classification of the basic types of admissible solutions of the general Friedmann equation with non-vanishing cosmological constant, for the case that radiation and matter do not couple. There are four distinct types. The classification first uses the discriminant of a third-degree polynomial closely related to the right-hand side of the Friedmann equation. The decisive term is then a critical radiation density, which can be calculated explicitly.
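The discriminant criterion used here is the standard one for cubics. As a generic sketch (not the paper's specific polynomial): for a cubic a*x**3 + b*x**2 + c*x + d, the sign of the discriminant separates three distinct real roots (> 0), a repeated root (= 0), and a single real root (< 0).

```python
def cubic_discriminant(a, b, c, d):
    # Discriminant of a*x**3 + b*x**2 + c*x + d.
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

print(cubic_discriminant(1, 0, -1, 0))   # x^3 - x, roots -1, 0, 1: positive (4)
print(cubic_discriminant(1, 0, 0, 1))    # x^3 + 1, one real root: negative (-27)
print(cubic_discriminant(1, -3, 3, -1))  # (x - 1)^3, repeated root: zero
```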
The Gaussian Graphical Model (GGM) is a popular tool for incorporating sparsity into joint multivariate distributions. The G-Wishart distribution, a conjugate prior for precision matrices satisfying general GGM constraints, has now been in existence for over a decade. However, due to the lack of a direct sampler, its use has been limited in hierarchical Bayesian contexts, relegating mixing over the class of GGMs mostly to situations involving standard Gaussian likelihoods. Recent work has developed methods that couple model and parameter moves, first through reversible jump methods and later by direct evaluation of conditional Bayes factors and subsequent resampling. Further, methods for avoiding prior normalizing constant calculations (a serious bottleneck and source of numerical instability) have been proposed. We review and clarify these developments and then propose a new methodology for GGM comparison that blends many recent themes. Theoretical developments and computational timing experiments reveal an algorithm that has limited computational demands and dramatically improves on computing times of existing methods. We conclude by developing a parsimonious multivariate stochastic volatility model that embeds GGM uncertainty in a larger hierarchical framework. The method is shown to be capable of adapting to swings in market volatility, offering improved calibration of predictive distributions.
The chemical master equation (CME) is the fundamental evolution equation of the stochastic description of biochemical reaction kinetics. In most applications it is impossible to solve the CME directly due to its high dimensionality. Instead, indirect approaches based on realizations of the underlying Markov jump process are used, such as the stochastic simulation algorithm (SSA). In the SSA, however, every reaction event has to be resolved explicitly such that it becomes numerically inefficient when the system's dynamics include fast reaction processes or species with high population levels. In many hybrid approaches, such fast reactions are approximated as continuous processes or replaced by quasi-stationary distributions in either a stochastic or a deterministic context. Current hybrid approaches, however, almost exclusively rely on the computation of ensembles of stochastic realizations. We present a novel hybrid stochastic-deterministic approach to solve the CME directly. Our starting point is a partitioning of the molecular species into discrete and continuous species that induces a partitioning of the reactions into discrete-stochastic and continuous-deterministic processes. The approach is based on a WKB (Wentzel-Kramers-Brillouin) ansatz for the conditional probability distribution function (PDF) of the continuous species (given a discrete state) in combination with Laplace's method of integral approximation. The resulting hybrid stochastic-deterministic evolution equations comprise a CME with averaged propensities for the PDF of the discrete species that is coupled to an evolution equation of the related expected levels of the continuous species for each discrete state. In contrast to indirect hybrid methods, the impact of the evolution of discrete species on the dynamics of the continuous species has to be taken into account explicitly. The proposed approach is efficient whenever the number of discrete molecular species is small. We illustrate the performance of the new hybrid stochastic-deterministic approach in an application to model systems of biological interest.
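For readers unfamiliar with the SSA mentioned above, here is a minimal Gillespie-type sketch for a simple birth-death system (the rate constants are hypothetical; this illustrates the trajectory-based approach that the paper's direct hybrid method is contrasted with, not the hybrid method itself).

```python
import random

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
    # Gillespie SSA for the birth-death system:
    #   0 -> X  with propensity k_birth
    #   X -> 0  with propensity k_death * x
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential waiting time to next event
        if rng.random() * a0 < a1:        # choose which reaction fires
            x += 1
        else:
            x -= 1
    return x

# The stationary mean population is k_birth / k_death = 100 molecules.
print(ssa_birth_death())
```

Every single reaction event is simulated explicitly, which is exactly why the SSA becomes expensive for fast reactions or large populations, motivating the hybrid treatment described in the abstract.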