Institut für Mathematik: Bibliography 2012
28 journal articles (English; no fulltext available)
The paper presents a classification of the basic types of admissible solutions of the general Friedmann equation with non-vanishing cosmological constant, for the case that radiation and matter do not couple. There are four distinct types. The classification first uses the discriminant of a polynomial of the third degree, closely related to the right-hand side of the Friedmann equation. The decisive term is then a critical radiation density, which can be calculated explicitly.
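The discriminant-based case analysis described above can be sketched in a few lines. The helper below is a generic illustration for an arbitrary cubic, not the specific polynomial from the paper:

```python
def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

def count_real_roots(a, b, c, d):
    """Number of distinct real roots of the cubic, read off from the
    sign of its discriminant."""
    disc = cubic_discriminant(a, b, c, d)
    if disc > 0:
        return 3   # three distinct real roots
    if disc < 0:
        return 1   # one real root, two complex conjugate roots
    return 2       # degenerate case: at least one repeated root
```

Loosely speaking, the zero-discriminant boundary is where the critical radiation density mentioned in the abstract sits: crossing it changes the number of admissible turning points of the scale factor.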
The ensemble Kalman filter has emerged as a promising filter algorithm for nonlinear differential equations subject to intermittent observations. In this paper, we extend the well-known Kalman-Bucy filter for linear differential equations subject to continuous observations to the ensemble setting and nonlinear differential equations. The proposed filter is called the ensemble Kalman-Bucy filter and its performance is demonstrated for a simple mechanical model (Langevin dynamics) subject to incremental observations of its velocity.
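A minimal sketch of a deterministic ensemble Kalman-Bucy update for a scalar Langevin-type model, assuming linear observations (H = 1) and a simple Euler discretisation; all names, parameters, and the Bergemann/Reich-style correction term are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def enkbf(dz_list, ensemble, dt, obs_var, drift):
    """Deterministic ensemble Kalman-Bucy step for a scalar state observed
    continuously with noise density obs_var; dz_list holds the observed
    increments dz = x_true dt + sqrt(obs_var dt) dW."""
    means = []
    for dz in dz_list:
        m = ensemble.mean()
        p = ensemble.var()
        # gain p / obs_var times the innovation against the observed increment
        innovation = dz - 0.5 * (ensemble + m) * dt
        ensemble = ensemble + drift(ensemble) * dt + (p / obs_var) * innovation
        means.append(ensemble.mean())
    return ensemble, np.array(means)

# synthetic truth: an Ornstein-Uhlenbeck (Langevin-type) path and its increments
rng = np.random.default_rng(0)
dt, n_steps, obs_var = 0.01, 1000, 1.0
x, truth, dz_list = 2.0, [], []
for _ in range(n_steps):
    x += -x * dt + np.sqrt(0.1 * dt) * rng.normal()
    truth.append(x)
    dz_list.append(x * dt + np.sqrt(obs_var * dt) * rng.normal())

ens0 = rng.normal(0.0, 1.0, size=50)
ens, means = enkbf(dz_list, ens0, dt, obs_var, drift=lambda z: -z)
```

The ensemble spread contracts deterministically under the update while the ensemble mean follows the classical Kalman-Bucy mean equation.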
Bayesian selection of Markov models for symbol sequences: application to microsaccadic eye movements
(2012)
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
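The integrated likelihood described above can be sketched with Dirichlet-multinomial marginals. Conditioning on the first `order` symbols and using a symmetric Dirichlet(alpha) prior are simplifying assumptions for illustration, not necessarily the paper's exact setup:

```python
import math
from collections import defaultdict

def log_marginal_likelihood(seq, order, alphabet, alpha=1.0):
    """Integrated (marginal) likelihood of a Markov chain of the given order,
    with a symmetric Dirichlet(alpha) prior on each row of transition
    probabilities, conditioning on the first `order` symbols."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(seq)):
        context = tuple(seq[i - order:i])
        counts[context][seq[i]] += 1
    k = len(alphabet)
    logml = 0.0
    for context, row in counts.items():
        n = sum(row.values())
        logml += math.lgamma(alpha * k) - math.lgamma(alpha * k + n)
        for s in alphabet:
            logml += math.lgamma(alpha + row.get(s, 0)) - math.lgamma(alpha)
    return logml

seq = "AB" * 50          # strongly first-order-dependent toy sequence
m0 = log_marginal_likelihood(seq, 0, "AB")
m1 = log_marginal_likelihood(seq, 1, "AB")
```

For this toy sequence the first-order marginal likelihood dominates the zeroth-order one, which is the kind of comparison the Bayesian framework above performs across participants.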
The authors discuss the use of the discrepancy principle for statistical inverse problems, when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail, and it will yield sub-optimal rates. Therefore, a modification of the discrepancy is introduced, which corrects both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration it is shown to yield order optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, however at a sub-optimal rate, in general. This study uses and complements previous results for bounded deterministic noise.
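A sketch of the plain (unmodified) discrepancy principle for Tikhonov regularisation on a toy diagonal problem; the safety factor tau, the log-scale bisection, and the problem itself are illustrative choices:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularised solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_alpha(A, y, delta, tau=1.2, lo=1e-12, hi=1e6, iters=200):
    """Bisect (in log scale) for alpha with ||A x_alpha - y|| = tau * delta;
    the residual norm is monotonically increasing in alpha."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(A @ tikhonov(A, y, mid) - y)
        if r > tau * delta:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo * hi)

# hypothetical mildly ill-posed diagonal problem with noise level delta
s = np.array([1.0, 0.5, 0.1, 0.01])
A = np.diag(s)
x_true = np.ones(4)
delta = 0.05
y = A @ x_true + delta * np.array([0.5, -0.5, 0.5, -0.5])
alpha = discrepancy_alpha(A, y, delta)
```

The returned alpha makes the residual match tau * delta; the modification discussed in the paper addresses the statistical (trace-class noise) setting, which this deterministic sketch does not capture.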
The Gaussian Graphical Model (GGM) is a popular tool for incorporating sparsity into joint multivariate distributions. The G-Wishart distribution, a conjugate prior for precision matrices satisfying general GGM constraints, has now been in existence for over a decade. However, due to the lack of a direct sampler, its use has been limited in hierarchical Bayesian contexts, relegating mixing over the class of GGMs mostly to situations involving standard Gaussian likelihoods. Recent work has developed methods that couple model and parameter moves, first through reversible jump methods and later by direct evaluation of conditional Bayes factors and subsequent resampling. Further, methods for avoiding prior normalizing constant calculations-a serious bottleneck and source of numerical instability-have been proposed. We review and clarify these developments and then propose a new methodology for GGM comparison that blends many recent themes. Theoretical developments and computational timing experiments reveal an algorithm that has limited computational demands and dramatically improves on computing times of existing methods. We conclude by developing a parsimonious multivariate stochastic volatility model that embeds GGM uncertainty in a larger hierarchical framework. The method is shown to be capable of adapting to swings in market volatility, offering improved calibration of predictive distributions.
The study of the semigroups OP_n, of all orientation-preserving transformations on an n-element chain, and OR_n, of all orientation-preserving or orientation-reversing transformations on an n-element chain, began in [17] and [5]. In order to bring more insight into the subsemigroup structure of OP_n and OR_n, we characterize their maximal subsemigroups.
A partial transformation alpha on an n-element chain X_n is called order-preserving if x <= y implies x alpha <= y alpha for all x, y in the domain of alpha, and it is called extensive if x <= x alpha for all x in the domain of alpha. The set of all partial order-preserving extensive transformations on X_n forms a semiband POE_n. We determine the maximal subsemigroups as well as the maximal subsemibands of POE_n.
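The two defining properties of the semiband of order-preserving extensive partial transformations are easy to check computationally. A brute-force enumeration, feasible only for tiny n, of its elements on the chain {1, ..., n}:

```python
from itertools import product

def is_order_preserving(t):
    """t: dict mapping x -> x*t on its domain (a partial transformation)."""
    dom = sorted(t)
    return all(t[x] <= t[y] for x in dom for y in dom if x <= y)

def is_extensive(t):
    return all(x <= t[x] for x in t)

def poe(n):
    """Enumerate all order-preserving extensive partial transformations
    on the chain {1, ..., n} (None marks a point outside the domain)."""
    members = []
    for images in product(*[[None] + list(range(1, n + 1)) for _ in range(n)]):
        t = {x: fx for x, fx in zip(range(1, n + 1), images) if fx is not None}
        if is_extensive(t) and is_order_preserving(t):
            members.append(t)
    return members
```

For n = 1 and n = 2 this yields 2 and 6 elements respectively, which can be verified by hand from the definitions.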
Let (M, g) be a complete 3-dimensional asymptotically flat manifold with everywhere positive scalar curvature. We prove that, given a compact subset K of M, all volume preserving stable constant mean curvature surfaces of sufficiently large area will avoid K. This complements the results of G. Huisken and S.-T. Yau [17] and of J. Qing and G. Tian [26] on the uniqueness of large volume preserving stable constant mean curvature spheres in initial data sets that are asymptotically close to Schwarzschild with mass m > 0. The analysis in [17] and [26] takes place in the asymptotic regime of M. Here we adapt ideas from the minimal surface proof of the positive mass theorem [32] by R. Schoen and S.-T. Yau and develop geometric properties of volume preserving stable constant mean curvature surfaces to handle surfaces that run through the part of M that is far from Euclidean.
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
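The spiked-correlation behaviour described above can be reproduced in a short simulation: a one-dimensional signal is projected into many dimensions, noise is added, and the eigenspectrum of the resulting correlation matrix shows one dominant eigenvalue. Sizes, loadings, and the noise level below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 40                  # samples x dimensions (hypothetical sizes)
signal = rng.normal(size=n)     # one-dimensional latent signal
loadings = rng.uniform(0.5, 1.0, size=p)
noise = rng.normal(size=(n, p))
X = np.outer(signal, loadings) + 0.5 * noise   # signal embedded in p dims

C = np.corrcoef(X, rowvar=False)               # p x p correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1] # descending eigenspectrum
```

The leading eigenvalue carries most of the trace (which equals p for a correlation matrix), while the remaining bulk reflects noise; lowering the loadings or raising the noise makes individual pairwise correlations look "low" even though the collective eigenspectrum still reveals the embedded signal.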
We show that it is possible to approximate the zeta-function of a curve over a finite field by meromorphic functions which satisfy the same functional equation and moreover satisfy (respectively do not satisfy) an analog of the Riemann hypothesis. In the other direction, it is possible to approximate holomorphic functions by simple manipulations of such a zeta-function. No number theory is required to understand the theorems and their proofs, for it is known that the zeta-functions of curves over finite fields are very explicit meromorphic functions. We study the approximation properties of these meromorphic functions.
In the 1980s, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of the large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, which are considerably more precise than the altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent of gravity. Our analysis confirms the existence of gravity undulations at large scale in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific Ocean, where they are particularly clear, and is associated with a conjugate direction at the 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate and have never before been so clearly observed. This is due to the great precision of GRACE data at those wavelengths. Given the large scale of these undulations, they are most likely related to mantle processes.
Taking into account observations and models from other geophysical sources, such as seismological tomography, convection and geochemical models, and electrical conductivity in the mantle, we find that all these inputs indicate a directional fabric of the mantle flows at depth, reflecting how the history of subduction influences the organization of lower mantle upwellings.
In this work, a closure experiment for tropospheric aerosol is presented. Aerosol size distributions and single scattering albedo from remote sensing data are compared to those measured in situ. An aerosol pollution event on 4 April 2009 was observed by ground-based and airborne lidar and photometer in and around Ny-Alesund, Spitsbergen, as well as by DMPS, nephelometer and particle soot absorption photometer at the nearby Zeppelin Mountain Research Station.
The presented measurements were conducted in an area of 40 x 20 km around Ny-Alesund as part of the 2009 Polar Airborne Measurements and Arctic Regional Climate Model Simulation Project (PAMARCMiP). Aerosol mainly in the accumulation mode was found in the lower troposphere; however, enhanced backscattering was observed up to tropopause altitude. A comparison of meteorological data available at different locations reveals a stable multi-layer structure of the lower troposphere. This is followed by the retrieval of optical and microphysical aerosol parameters. Extinction values have been derived using two different methods, and it was found that the extinction (especially in the UV) derived from Raman lidar data significantly surpasses that derived from photometer AOD profiles. Airborne lidar data show volume depolarization values of less than 2.5% between 500 m and 2.5 km altitude; hence, particles in this range can be assumed to be of spherical shape. In-situ particle number concentrations measured at the Zeppelin Mountain Research Station at 474 m altitude peak at about 0.18 μm diameter, which was also found in the microphysical inversion calculations performed at 850 m and 1500 m altitude. Number concentrations depend on the assumed extinction values and decrease slightly with altitude, as does the effective particle diameter. A low imaginary part of the derived refractive index suggests weakly absorbing aerosols, which is confirmed by low black carbon concentrations measured at the Zeppelin Mountain as well as on board the Polar 5 aircraft.
In order to examine variations in aftershock decay rate, we propose a Bayesian framework to estimate the {K, c, p}-values of the modified Omori law (MOL), lambda(t) = K(c + t)^(-p). The Bayesian setting allows us not only to produce a point estimator of these three parameters but also to assess their uncertainties and posterior dependencies with respect to the observed aftershock sequences. Using a new parametrization of the MOL, we identify the trade-off between the c and p-value estimates and discuss its dependence on the number of aftershocks. Then, we analyze the influence of the catalog completeness interval [t_start, t_stop] on the various estimates. To test this Bayesian approach on natural aftershock sequences, we use two independent and non-overlapping aftershock catalogs of the same earthquakes in Japan. Taking into account the posterior uncertainties, we show that both the handpicked (short times) and the instrumental (long times) catalogs predict the same ranges of parameter values. We therefore conclude that the same MOL may be valid over short and long times.
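The MOL log-likelihood underlying such a posterior can be written down directly as the standard inhomogeneous-Poisson form: the sum of log-intensities at the aftershock times minus the integrated intensity over the completeness window. A minimal sketch:

```python
import math

def mol_loglik(times, K, c, p, t_start, t_stop):
    """Log-likelihood of the modified Omori law lambda(t) = K * (c + t)**(-p)
    for aftershock occurrence times inside the window [t_start, t_stop]."""
    log_intensity = sum(math.log(K) - p * math.log(c + t) for t in times)
    if p == 1.0:
        integral = K * (math.log(c + t_stop) - math.log(c + t_start))
    else:
        integral = K * ((c + t_stop)**(1 - p) - (c + t_start)**(1 - p)) / (1 - p)
    return log_intensity - integral

ll = mol_loglik([1.0], K=1.0, c=1.0, p=2.0, t_start=0.0, t_stop=2.0)
```

A grid evaluation or MCMC over (K, c, p) with a prior then yields the posterior; the c-p trade-off discussed above appears as a ridge in this likelihood surface.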
The principal object in noncommutative geometry is the spectral triple consisting of an algebra A, a Hilbert space H and a Dirac operator D. Field theories are incorporated in this approach by the spectral action principle, which sets the field theory action to Tr f(D^2/Lambda^2), where f is a real function such that the trace exists and Lambda is a cutoff scale. In the low-energy (weak-field) limit, the spectral action reproduces reasonably well the known physics including the standard model. However, not much is known about the spectral action beyond the low-energy approximation. In this paper, after an extensive introduction to spectral triples and spectral actions, we study various expansions of the spectral actions (exemplified by the heat kernel). We derive the convergence criteria. For a commutative spectral triple, we compute the heat kernel on the torus up to the second order in gauge connection and consider limiting cases.
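For reference, the low-energy expansion alluded to here is the familiar heat-kernel form of the spectral action in four dimensions, shown in one common normalisation (prefactors and the truncation order vary between conventions):

```latex
\operatorname{Tr} f\!\left(\frac{D^2}{\Lambda^2}\right)
  \;\sim\; f_4\,\Lambda^4\, a_0(D^2) + f_2\,\Lambda^2\, a_2(D^2)
           + f_0\, a_4(D^2) + \mathcal{O}(\Lambda^{-2}),
\qquad
f_4 = \int_0^\infty u\, f(u)\,\mathrm{d}u,\quad
f_2 = \int_0^\infty f(u)\,\mathrm{d}u,\quad
f_0 = f(0),
```

where the a_k(D^2) are the Seeley-DeWitt heat-kernel coefficients; the convergence criteria derived in the paper concern precisely when such expansions are more than asymptotic.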
This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to 'Applications of zeta functions and other spectral functions in mathematics and physics'.
In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudobiomolecular reactions in biology, such as molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic and in the approximate treatment.
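The waiting-time distribution can be probed numerically with a Gillespie simulation in which one A molecule is tagged; rate constants and copy numbers below are arbitrary illustrative values. Under the mass-action (fixed-B) approximation the waiting time would be exponential with rate k1 * nB; fluctuations and depletion of B make the exact distribution deviate at low copy numbers:

```python
import numpy as np

def tagged_waiting_time(rng, nA, nB, nC, k1, k2):
    """Gillespie simulation of A + B <-> C; returns the time until one
    tagged A molecule undergoes the forward reaction."""
    t = 0.0
    while True:
        a_fwd = k1 * nA * nB       # propensity of A + B -> C
        a_bwd = k2 * nC            # propensity of C -> A + B
        t += rng.exponential(1.0 / (a_fwd + a_bwd))
        if rng.uniform() * (a_fwd + a_bwd) < a_fwd:
            if rng.uniform() < 1.0 / nA:    # the tagged A was the one reacting
                return t
            nA, nB, nC = nA - 1, nB - 1, nC + 1
        else:
            nA, nB, nC = nA + 1, nB + 1, nC - 1

rng = np.random.default_rng(2)
samples = [tagged_waiting_time(rng, nA=5, nB=5, nC=0, k1=1.0, k2=1.0)
           for _ in range(2000)]
```

Because B is depleted as the reaction proceeds, the sampled mean waiting time exceeds the mass-action value 1/(k1 * nB0) = 0.2, illustrating the low-copy-number deviation from the exponential density.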
We analyze a general class of difference operators with a multi-well potential and a small parameter. We decouple the wells by introducing certain Dirichlet operators on regions containing only one potential well, and we treat the full eigenvalue problem as a small perturbation of these comparison problems. We describe tunneling by a certain interaction matrix, similar to the analysis for the Schrödinger operator [see Helffer and Sjöstrand in Commun Partial Differ Equ 9:337-408, 1984], and estimate the remainder, which is exponentially small and roughly quadratic compared with the interaction matrix.
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development processes, however, rely on pharmacokinetic compartment models while cell-level dynamics are typically neglected. We here present a systematic approach to integrate cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high affinity antibodies on the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
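The saturation effect can be seen already in a minimal equilibrium-binding calculation, a deliberately simplified stand-in for the multi-level model; the concentration and K_D values below are hypothetical:

```python
def receptor_occupancy(ligand_conc, kd):
    """Equilibrium fraction of receptors bound at free ligand concentration
    ligand_conc, simple 1:1 binding (both quantities in the same units)."""
    return ligand_conc / (ligand_conc + kd)

# hypothetical antibody concentration (nM) and dissociation constants (nM)
conc = 10.0
occupancies = {kd: receptor_occupancy(conc, kd) for kd in (10.0, 1.0, 0.1, 0.01)}
```

Increasing affinity tenfold from K_D = 0.1 nM to 0.01 nM changes occupancy by under one percentage point, mirroring the saturation of the F(ab)-mediated effect described above.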
Different GRACE data analysis centers provide temporal variations of the Earth's gravity field as monthly, 10-daily or weekly solutions. These temporal mean fields cannot model the variations occurring during the respective time span. The aim of our approach is to extract as much temporal information as possible out of the given GRACE data. Therefore, the temporal resolution shall be increased with the goal of deriving daily snapshots. Yet such an increase in temporal resolution is accompanied by a loss of redundancy and therefore a reduced accuracy if the daily solutions are calculated individually. The approach presented here therefore introduces spatial and temporal correlations of the expected gravity field signal derived from geophysical models in addition to the daily observations, thus effectively constraining the spatial and temporal evolution of the GRACE solution. The GRACE data processing is then performed within the framework of a Kalman filter and smoother estimation procedure.
The approach is at first investigated in a closed-loop simulation scenario and then applied to the original GRACE observations (level-1B data) to calculate daily solutions as part of the gravity field model ITG-Grace2010. Finally, the daily models are compared to vertical GPS station displacements and ocean bottom pressure observations.
From these comparisons it can be concluded that, particularly in higher latitudes, the daily solutions contain high-frequency temporal gravity field information and represent an improvement over existing geophysical models.
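The Kalman filter and smoother estimation the approach relies on can be sketched for a scalar random-walk state observed daily; the model and all values are illustrative, not the ITG-Grace2010 processing chain:

```python
import numpy as np

def kalman_smoother(obs, q, r, m0, p0):
    """Scalar random-walk Kalman filter followed by an RTS smoother.
    q: process-noise variance (the temporal-correlation constraint),
    r: observation-noise variance."""
    n = len(obs)
    m_pred, p_pred = np.zeros(n), np.zeros(n)
    m_filt, p_filt = np.zeros(n), np.zeros(n)
    m, p = m0, p0
    for k, y in enumerate(obs):
        m_pred[k], p_pred[k] = m, p + q            # predict
        gain = p_pred[k] / (p_pred[k] + r)         # update
        m_filt[k] = m_pred[k] + gain * (y - m_pred[k])
        p_filt[k] = (1 - gain) * p_pred[k]
        m, p = m_filt[k], p_filt[k]
    m_smooth, p_smooth = m_filt.copy(), p_filt.copy()
    for k in range(n - 2, -1, -1):                 # RTS backward pass
        g = p_filt[k] / p_pred[k + 1]
        m_smooth[k] = m_filt[k] + g * (m_smooth[k + 1] - m_pred[k + 1])
        p_smooth[k] = p_filt[k] + g**2 * (p_smooth[k + 1] - p_pred[k + 1])
    return m_filt, p_filt, m_smooth, p_smooth

rng = np.random.default_rng(3)
obs = np.cumsum(rng.normal(0, 0.3, 30)) + rng.normal(0, 0.7, 30)
m_filt, p_filt, m_smooth, p_smooth = kalman_smoother(obs, q=0.09, r=0.49,
                                                     m0=0.0, p0=1.0)
```

The smoothed variances never exceed the filtered ones: each daily snapshot borrows strength from past and future observations, which is exactly the redundancy argument made above.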
The chemical master equation (CME) is the fundamental evolution equation of the stochastic description of biochemical reaction kinetics. In most applications it is impossible to solve the CME directly due to its high dimensionality. Instead, indirect approaches based on realizations of the underlying Markov jump process are used, such as the stochastic simulation algorithm (SSA). In the SSA, however, every reaction event has to be resolved explicitly such that it becomes numerically inefficient when the system's dynamics include fast reaction processes or species with high population levels. In many hybrid approaches, such fast reactions are approximated as continuous processes or replaced by quasi-stationary distributions in either a stochastic or a deterministic context. Current hybrid approaches, however, almost exclusively rely on the computation of ensembles of stochastic realizations. We present a novel hybrid stochastic-deterministic approach to solve the CME directly. Our starting point is a partitioning of the molecular species into discrete and continuous species that induces a partitioning of the reactions into discrete-stochastic and continuous-deterministic processes. The approach is based on a WKB (Wentzel-Kramers-Brillouin) ansatz for the conditional probability distribution function (PDF) of the continuous species (given a discrete state) in combination with Laplace's method of integral approximation. The resulting hybrid stochastic-deterministic evolution equations comprise a CME with averaged propensities for the PDF of the discrete species that is coupled to an evolution equation of the related expected levels of the continuous species for each discrete state. In contrast to indirect hybrid methods, the impact of the evolution of discrete species on the dynamics of the continuous species has to be taken into account explicitly. The proposed approach is efficient whenever the number of discrete molecular species is small. 
We illustrate the performance of the new hybrid stochastic-deterministic approach in an application to model systems of biological interest.
We reconsider the fundamental work of Fichtner [2] and exhibit the permanental structure of the ideal Bose gas again, using a new approach which combines a characterization of infinitely divisible random measures (due to Kerstan, Kummer and Matthes [4, 6] and Mecke [9, 10]) with a decomposition of the moment measures into their factorial measures due to Krickeberg [5]. To be more precise, we exhibit the moment measures of all orders of the general ideal Bose gas in terms of certain loop integrals. This representation can be considered as a point process analogue of the old idea of Symanzik [15] that local times and self-crossings of Brownian motion can be used as a tool in quantum field theory. Behind the notion of a general ideal Bose gas there is a class of infinitely divisible point processes of all orders with a Levy measure belonging to some large class of measures containing that of the classical ideal Bose gas considered by Fichtner. It is well known that the calculation of higher-order moments of point processes is notoriously complicated; see for instance Krickeberg's calculations for the Poisson or the Cox process in [5]. Relations to the work of Shirai and Takahashi [12] and Soshnikov [14] on permanental and determinantal processes are outlined.