Institut für Mathematik: 60 publications from 2012, all in English and part of the bibliography (28 articles, 26 preprints, 5 doctoral theses, 1 other).
We develop the method of Fischer-Riesz equations for general boundary value problems elliptic in the sense of Douglis-Nirenberg. To this end we reduce them to a boundary problem for a (possibly overdetermined) first order system whose classical symbol has a left inverse. For such a problem there is a uniquely determined boundary value problem which is adjoint to the given one with respect to the Green formula. On using a well elaborated theory of approximation by solutions of the adjoint problem, we find the Cauchy data of solutions of our problem.
A discrete analogue of the Witten Laplacian on the n-dimensional integer lattice is considered. After rescaling of the operator and the lattice size we analyze the tunnel effect between different wells, providing sharp asymptotics of the low-lying spectrum. Our proof, inspired by work of B. Helffer, M. Klein and F. Nier in continuous setting, is based on the construction of a discrete Witten complex and a semiclassical analysis of the corresponding discrete Witten Laplacian on 1-forms. The result can be reformulated in terms of metastable Markov processes on the lattice.
We consider compact Riemannian spin manifolds without boundary equipped with orthogonal connections. We investigate the induced Dirac operators and the associated commutative spectral triples. In the case of dimension four and totally anti-symmetric torsion we compute the Chamseddine-Connes spectral action, deduce the equations of motion and discuss critical points.
We propose a conversion method from alarm-based to rate-based earthquake forecast models. A differential probability gain g(alarm)(ref) is the absolute value of the local slope of the Molchan trajectory, which evaluates the performance of the alarm-based model with respect to the chosen reference model. We assume that this differential probability gain is constant over time; its value at each point of the testing region depends only on the alarm function value. The rate-based model is then the event rate of the reference model at each point multiplied by the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows performance similar to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, to combine seismic hazard maps with short- and medium-term forecasts.
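The per-cell conversion step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the cell names, rate values and gain function below are hypothetical.

```python
# Sketch of the alarm-to-rate conversion (hypothetical illustration data).
# For each cell of the testing region, the rate-based model multiplies the
# reference model's event rate by the differential probability gain that
# corresponds to the local alarm function value.

def convert_to_rate_based(reference_rates, alarm_values, gain):
    """reference_rates: dict cell -> event rate of the reference model.
    alarm_values: dict cell -> alarm function value at that cell.
    gain: callable mapping an alarm function value to the (time-constant)
    differential probability gain."""
    return {cell: rate * gain(alarm_values[cell])
            for cell, rate in reference_rates.items()}

# Toy example: raise rates where the alarm function is high, lower them elsewhere.
reference = {"cell_a": 0.10, "cell_b": 0.02}
alarms = {"cell_a": 0.9, "cell_b": 0.1}
toy_gain = lambda a: 2.0 if a > 0.5 else 0.5

rates = convert_to_rate_based(reference, alarms, toy_gain)
# rates["cell_a"] == 0.2 and rates["cell_b"] == 0.01
```

The gain function is deliberately a free parameter here, mirroring the paper's point that it is estimated from the Molchan trajectory of the alarm-based model.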
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development processes, however, rely on pharmacokinetic compartment models while cell-level dynamics are typically neglected. We here present a systematic approach to integrate cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high affinity antibodies in the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
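For readers unfamiliar with pharmacokinetic compartment models, a minimal one-compartment elimination model can be integrated with a simple Euler step. This is generic background, not the paper's multi-level EGFR model; the rate constant and initial dose are hypothetical.

```python
# Minimal one-compartment pharmacokinetic sketch (hypothetical parameters).
# Drug concentration C decays by first-order elimination: dC/dt = -k_e * C.
# The paper's multi-level model couples such compartment dynamics to a
# cell-level receptor model; this sketch shows only the compartment part.

def simulate_elimination(c0, k_e, t_end, dt=0.01):
    """Forward-Euler integration of dC/dt = -k_e * C from C(0) = c0."""
    c, t, trajectory = c0, 0.0, [c0]
    while t < t_end:
        c += dt * (-k_e * c)
        t += dt
        trajectory.append(c)
    return trajectory

traj = simulate_elimination(c0=10.0, k_e=0.5, t_end=4.0)
# Concentration decays monotonically toward the analytic value 10*exp(-0.5*4).
```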
We show that it is possible to approximate the zeta-function of a curve over a finite field by meromorphic functions which satisfy the same functional equation and moreover satisfy (respectively do not satisfy) an analog of the Riemann hypothesis. In the other direction, it is possible to approximate holomorphic functions by simple manipulations of such a zeta-function. No number theory is required to understand the theorems and their proofs, for it is known that the zeta-functions of curves over finite fields are very explicit meromorphic functions. We study the approximation properties of these meromorphic functions.
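For background on the explicitness alluded to above (standard Weil theory for curves over finite fields, not a result of this paper): the zeta-function of a smooth projective curve of genus g over a field with q elements is the rational function

```latex
Z(t) \;=\; \frac{P(t)}{(1-t)(1-qt)}, \qquad t = q^{-s}, \qquad
P(t) \;=\; \prod_{i=1}^{2g} (1 - \alpha_i t),
```

where P has integer coefficients, the functional equation reads P(t) = q^g t^{2g} P(1/(qt)), and the Riemann hypothesis for curves states that |\alpha_i| = \sqrt{q} for all i.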
The authors discuss the use of the discrepancy principle for statistical inverse problems when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail and yield sub-optimal rates. Therefore, a modification of the discrepancy principle is introduced which corrects both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration, it is shown to yield order-optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, however at a sub-optimal rate in general. This study uses and complements previous results for bounded deterministic noise.
We reconsider the fundamental work of Fichtner [2] and exhibit the permanental structure of the ideal Bose gas again, using a new approach which combines a characterization of infinitely divisible random measures (due to Kerstan, Kummer and Matthes [4, 6] and Mecke [9, 10]) with a decomposition of the moment measures into their factorial measures due to Krickeberg [5]. To be more precise, we exhibit the moment measures of all orders of the general ideal Bose gas in terms of certain loop integrals. This representation can be considered as a point process analogue of the old idea of Symanzik [15] that local times and self-crossings of the Brownian motion can be used as a tool in quantum field theory. Behind the notion of a general ideal Bose gas there is a class of infinitely divisible point processes of all orders with a Lévy measure belonging to some large class of measures containing that of the classical ideal Bose gas considered by Fichtner. It is well known that the calculation of moments of higher order of point processes is notoriously complicated; see for instance Krickeberg's calculations for the Poisson or the Cox process in [5]. Relations to the work of Shirai, Takahashi [12] and Soshnikov [14] on permanental and determinantal processes are outlined.
Let (M, g) be a complete 3-dimensional asymptotically flat manifold with everywhere positive scalar curvature. We prove that, given a compact subset K ⊂ M, all volume preserving stable constant mean curvature surfaces of sufficiently large area will avoid K. This complements the results of G. Huisken and S.-T. Yau [17] and of J. Qing and G. Tian [26] on the uniqueness of large volume preserving stable constant mean curvature spheres in initial data sets that are asymptotically close to Schwarzschild with mass m > 0. The analysis in [17] and [26] takes place in the asymptotic regime of M. Here we adapt ideas from the minimal surface proof of the positive mass theorem [32] by R. Schoen and S.-T. Yau and develop geometric properties of volume preserving stable constant mean curvature surfaces to handle surfaces that run through the part of M that is far from Euclidean.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the 2004 M6.0 Parkfield earthquake, carrying out a number of inversions with different constraints and calculating their statistical significance. According to information criteria, the best result is obtained for a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and by the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can generally be explained by a stress relaxation process for the Parkfield case. This result also indicates that the data constraining the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of approximately 1.15 in seismic velocity.
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
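For reference, the hydrostatic balance condition imposed at each mesh point is the standard relation between pressure p, density ρ and gravitational acceleration g (general background, stated here in height coordinates rather than the model's own variables):

```latex
\frac{\partial p}{\partial z} \;=\; -\rho g .
```

Interpreting this balance as a holonomic constraint is what fixes the vertical position of the particles while leaving horizontal advection free.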
We consider quasicomplexes of pseudodifferential operators on a smooth compact manifold without boundary. To each quasicomplex we associate a complex of symbols. The quasicomplex is elliptic if this symbol complex is exact away from the zero section. We prove that elliptic quasicomplexes are Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes and prove a generalisation of the Atiyah-Singer index theorem.
The study of the semigroups OPn, of all orientation-preserving transformations on an n-element chain, and ORn, of all orientation-preserving or orientation-reversing transformations on an n-element chain, began in [17] and [5]. In order to bring more insight into the subsemigroup structure of OPn and ORn, we characterize their maximal subsemigroups.
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
A partial transformation α on an n-element chain Xn is called order-preserving if x ≤ y implies xα ≤ yα for all x, y in the domain of α, and it is called extensive if x ≤ xα for all x in the domain of α. The set of all partial order-preserving extensive transformations on Xn forms a semiband POEn. We determine the maximal subsemigroups as well as the maximal subsemibands of POEn.
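The two defining conditions are easy to check by brute force for small n. The following sketch (not from the paper) enumerates all partial order-preserving extensive transformations on a small chain; counting the empty transformation as an element is an assumption of this illustration.

```python
# Brute-force enumeration of POE_n for a small chain (illustration only).
# A partial map alpha on {1,...,n} is order-preserving if x <= y implies
# alpha(x) <= alpha(y) for x, y in its domain, and extensive if x <= alpha(x).
from itertools import combinations, product

def poe(n):
    chain = range(1, n + 1)
    elements = []
    for k in range(n + 1):
        for domain in combinations(chain, k):
            for images in product(chain, repeat=k):
                alpha = dict(zip(domain, images))
                extensive = all(x <= alpha[x] for x in alpha)
                preserving = all(alpha[x] <= alpha[y]
                                 for x in alpha for y in alpha if x <= y)
                if extensive and preserving:
                    elements.append(alpha)
    return elements

count2 = len(poe(2))  # 6 elements on the 2-chain, including the empty map
```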
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches we choose two different data region shapes that resemble that scenario. First, we investigate the errors in elliptical regions and secondly we choose a region obtained from two overlapping circular spherical caps. We conduct a Monte-Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to the ones obtained for circular spherical caps: the maximum error at the boundary decreases towards the centre of the region. A new result emerges as errors at the boundary vary with azimuth, being largest in the major axis direction and minimal in the minor axis direction. Inside the region there is an error decay towards a minimum at the centre at a rate similar to the one in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre. The minimum error occurs at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
The paper presents a classification of the basic types of admissible solutions of the general Friedmann equation with non-vanishing cosmological constant, for the case that radiation and matter do not couple. There are four distinct types. The classification first uses the discriminant of a polynomial of the third degree that is closely related to the right-hand side of the Friedmann equation. The decisive term is then a critical radiation density, which can be calculated explicitly.
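The discriminant-based step can be illustrated generically. The cubic below is a placeholder with hypothetical coefficients, not the specific polynomial derived from the Friedmann equation in the paper; only the standard discriminant formula and its root classification are used.

```python
# Discriminant of a cubic a*x^3 + b*x^2 + c*x + d (standard formula).
# Its sign separates the root configurations that drive such classifications:
# positive -> three distinct real roots, zero -> a repeated real root,
# negative -> one real root and two complex-conjugate roots.

def cubic_discriminant(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

def classify(a, b, c, d):
    disc = cubic_discriminant(a, b, c, d)
    if disc > 0:
        return "three distinct real roots"
    if disc == 0:
        return "repeated real root"
    return "one real root, two complex-conjugate roots"

# x^3 - x = x(x-1)(x+1) has three distinct real roots.
print(classify(1, 0, -1, 0))
```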
The Gaussian Graphical Model (GGM) is a popular tool for incorporating sparsity into joint multivariate distributions. The G-Wishart distribution, a conjugate prior for precision matrices satisfying general GGM constraints, has now been in existence for over a decade. However, due to the lack of a direct sampler, its use has been limited in hierarchical Bayesian contexts, relegating mixing over the class of GGMs mostly to situations involving standard Gaussian likelihoods. Recent work has developed methods that couple model and parameter moves, first through reversible jump methods and later by direct evaluation of conditional Bayes factors and subsequent resampling. Further, methods for avoiding prior normalizing constant calculations, a serious bottleneck and source of numerical instability, have been proposed. We review and clarify these developments and then propose a new methodology for GGM comparison that blends many recent themes. Theoretical developments and computational timing experiments reveal an algorithm that has limited computational demands and dramatically improves on computing times of existing methods. We conclude by developing a parsimonious multivariate stochastic volatility model that embeds GGM uncertainty in a larger hierarchical framework. The method is shown to be capable of adapting to swings in market volatility, offering improved calibration of predictive distributions.
The chemical master equation (CME) is the fundamental evolution equation of the stochastic description of biochemical reaction kinetics. In most applications it is impossible to solve the CME directly due to its high dimensionality. Instead, indirect approaches based on realizations of the underlying Markov jump process are used, such as the stochastic simulation algorithm (SSA). In the SSA, however, every reaction event has to be resolved explicitly such that it becomes numerically inefficient when the system's dynamics include fast reaction processes or species with high population levels. In many hybrid approaches, such fast reactions are approximated as continuous processes or replaced by quasi-stationary distributions in either a stochastic or a deterministic context. Current hybrid approaches, however, almost exclusively rely on the computation of ensembles of stochastic realizations. We present a novel hybrid stochastic-deterministic approach to solve the CME directly. Our starting point is a partitioning of the molecular species into discrete and continuous species that induces a partitioning of the reactions into discrete-stochastic and continuous-deterministic processes. The approach is based on a WKB (Wentzel-Kramers-Brillouin) ansatz for the conditional probability distribution function (PDF) of the continuous species (given a discrete state) in combination with Laplace's method of integral approximation. The resulting hybrid stochastic-deterministic evolution equations comprise a CME with averaged propensities for the PDF of the discrete species that is coupled to an evolution equation of the related expected levels of the continuous species for each discrete state. In contrast to indirect hybrid methods, the impact of the evolution of discrete species on the dynamics of the continuous species has to be taken into account explicitly. The proposed approach is efficient whenever the number of discrete molecular species is small. 
We illustrate the performance of the new hybrid stochastic-deterministic approach in an application to model systems of biological interest.
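The stochastic simulation algorithm referred to above can be sketched for a minimal birth-death system. The rate constants below are hypothetical; the hybrid method of the paper replaces ensembles of such realizations by a direct evolution of the probability distribution.

```python
# Minimal Gillespie SSA for a birth-death process (illustration only):
# reactions  0 -> X  (propensity k_birth)  and  X -> 0  (propensity k_death * x).
# Every reaction event is resolved explicitly, which is exactly why the SSA
# becomes inefficient for fast reactions or large populations.
import random

def ssa_birth_death(x0, k_birth, k_death, t_end, rng):
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a_birth, a_death = k_birth, k_death * x
        a_total = a_birth + a_death
        if a_total == 0:
            break
        t += rng.expovariate(a_total)          # time to the next reaction event
        if t >= t_end:
            break
        if rng.random() < a_birth / a_total:   # choose which reaction fires
            x += 1
        else:
            x -= 1
        path.append((t, x))
    return path

rng = random.Random(0)
path = ssa_birth_death(x0=5, k_birth=2.0, k_death=0.1, t_end=50.0, rng=rng)
# The population fluctuates around the stationary mean k_birth / k_death = 20.
```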