Refine
Has Fulltext
- no (28)
Year of publication
- 2012 (28)
Document Type
- Article (28)
Language
- English (28)
Is part of the Bibliography
- yes (28)
Keywords
- 35K65 (1)
- Aerosols (1)
- Arctic (1)
- Arctic haze (1)
- Cell-level kinetics (1)
- Chamseddine-Connes spectral action (1)
- Commutative geometries (1)
- Daily gravity field (1)
- EGFR (1)
- Elliptic complexes (1)
Institute
- Institut für Mathematik (28)
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
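The analysis step behind such a filter can be illustrated with a discrete conjugate Bayes update of a one-dimensional Gaussian-mixture prior under a linear observation. This is only a sketch of the general idea, not the continuous formulation of the analysis step used in the paper, and all names and parameters are our own:

```python
import numpy as np

def gmm_bayes_update(means, sigmas, weights, y_obs, H, r_obs):
    """Conjugate Bayes update of a 1-D Gaussian-mixture prior under a
    linear observation y = H*x + noise with variance r_obs: each
    component gets its own Kalman update, and the component weights
    are re-scaled by the marginal likelihood of the observation."""
    means = np.asarray(means, float)
    sigmas = np.asarray(sigmas, float)
    w = np.asarray(weights, float)
    s = (H * sigmas) ** 2 + r_obs                  # innovation variances
    k = sigmas ** 2 * H / s                        # per-component gains
    post_means = means + k * (y_obs - H * means)
    post_vars = (1.0 - k * H) * sigmas ** 2
    lik = np.exp(-0.5 * (y_obs - H * means) ** 2 / s) / np.sqrt(2 * np.pi * s)
    post_w = w * lik
    post_w /= post_w.sum()
    return post_means, np.sqrt(post_vars), post_w
```

An observation near one component's mean shifts the posterior weight towards that component while shrinking every component's variance, which is the mechanism that lets a mixture filter track multimodal distributions.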
A partial transformation alpha on an n-element chain X-n is called order-preserving if x <= y implies x alpha <= y alpha for all x, y in the domain of alpha and it is called extensive if x <= x alpha for all x in the domain of alpha. The set of all partial order-preserving extensive transformations on X-n forms a semiband POEn. We determine the maximal subsemigroups as well as the maximal subsemibands of POEn.
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low-dimensional signal embedded in high dimensions. This paper introduces a multicollinear model based on random matrix theory results that shows potential for characterising a gene cluster's correlation matrix. The model projects a one-dimensional signal into many dimensions and builds on the spiked covariance model, but characterises the behaviour of the corresponding correlation matrix instead. The eigenspectrum of the correlation matrix is examined empirically by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a procedure for estimating the dimension of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby an apparently 'low' correlation between a pair of genes may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
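The eigenspectrum behaviour described here can be explored with a small simulation: a one-dimensional signal projected into many dimensions plus unit noise, followed by the eigendecomposition of the empirical correlation matrix. This is a sketch under our own parameter choices, not the paper's model specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def spiked_correlation_spectrum(p=50, n=200, snr=2.0):
    """Project a one-dimensional signal into p dimensions, add unit
    noise, and return the sorted eigenvalues of the empirical
    correlation (not covariance) matrix of the n-by-p data matrix."""
    signal = rng.standard_normal(n)         # common one-dimensional driver
    loadings = np.ones(p)                   # equal loading in every dimension
    X = snr * np.outer(signal, loadings) + rng.standard_normal((n, p))
    C = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(C))[::-1]

eig = spiked_correlation_spectrum()
# One dominant eigenvalue carries the embedded signal; the bulk stays
# near the Marchenko-Pastur support even though every entry of C is noisy.
```

Lowering `snr` pushes pairwise correlations towards the noise level while the leading eigenvalue can remain clearly separated, which is the collective effect the abstract alludes to.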
The chemical master equation (CME) is the fundamental evolution equation of the stochastic description of biochemical reaction kinetics. In most applications it is impossible to solve the CME directly due to its high dimensionality. Instead, indirect approaches based on realizations of the underlying Markov jump process are used, such as the stochastic simulation algorithm (SSA). In the SSA, however, every reaction event has to be resolved explicitly such that it becomes numerically inefficient when the system's dynamics include fast reaction processes or species with high population levels. In many hybrid approaches, such fast reactions are approximated as continuous processes or replaced by quasi-stationary distributions in either a stochastic or a deterministic context. Current hybrid approaches, however, almost exclusively rely on the computation of ensembles of stochastic realizations. We present a novel hybrid stochastic-deterministic approach to solve the CME directly. Our starting point is a partitioning of the molecular species into discrete and continuous species that induces a partitioning of the reactions into discrete-stochastic and continuous-deterministic processes. The approach is based on a WKB (Wentzel-Kramers-Brillouin) ansatz for the conditional probability distribution function (PDF) of the continuous species (given a discrete state) in combination with Laplace's method of integral approximation. The resulting hybrid stochastic-deterministic evolution equations comprise a CME with averaged propensities for the PDF of the discrete species that is coupled to an evolution equation of the related expected levels of the continuous species for each discrete state. In contrast to indirect hybrid methods, the impact of the evolution of discrete species on the dynamics of the continuous species has to be taken into account explicitly. The proposed approach is efficient whenever the number of discrete molecular species is small. 
We illustrate the performance of the new hybrid stochastic-deterministic approach in an application to model systems of biological interest.
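For contrast with the direct approach of this abstract, the SSA baseline it mentions can be sketched in a few lines. This is a minimal, generic Gillespie implementation on a toy birth-death system; the names and rates are illustrative and not taken from the paper:

```python
import numpy as np

def ssa(x0, stoich, propensity, t_end, rng):
    """Gillespie's stochastic simulation algorithm: one exact sample
    path of the Markov jump process underlying the CME."""
    t, x = 0.0, np.array(x0, float)
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next event
        j = rng.choice(len(a), p=a / a0)  # index of the firing reaction
        x += stoich[j]
    return x

# Toy birth-death system: 0 -> A at rate 10, A -> 0 at rate 1.0 * #A;
# the stationary mean of #A is 10.
stoich = np.array([[1.0], [-1.0]])
prop = lambda x: np.array([10.0, 1.0 * x[0]])
final = ssa([0], stoich, prop, 20.0, np.random.default_rng(1))
```

Because every reaction event is resolved explicitly, the cost per path grows with the total event rate, which is exactly the inefficiency for fast reactions or high populations that motivates the hybrid approach above.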
The Gaussian Graphical Model (GGM) is a popular tool for incorporating sparsity into joint multivariate distributions. The G-Wishart distribution, a conjugate prior for precision matrices satisfying general GGM constraints, has now been in existence for over a decade. However, due to the lack of a direct sampler, its use has been limited in hierarchical Bayesian contexts, relegating mixing over the class of GGMs mostly to situations involving standard Gaussian likelihoods. Recent work has developed methods that couple model and parameter moves, first through reversible jump methods and later by direct evaluation of conditional Bayes factors and subsequent resampling. Further, methods for avoiding prior normalizing constant calculations (a serious bottleneck and source of numerical instability) have been proposed. We review and clarify these developments and then propose a new methodology for GGM comparison that blends many recent themes. Theoretical developments and computational timing experiments reveal an algorithm that has limited computational demands and dramatically improves on computing times of existing methods. We conclude by developing a parsimonious multivariate stochastic volatility model that embeds GGM uncertainty in a larger hierarchical framework. The method is shown to be capable of adapting to swings in market volatility, offering improved calibration of predictive distributions.
We propose a conversion method from alarm-based to rate-based earthquake forecast models. A differential probability gain g_alarm^ref is the absolute value of the local slope of the Molchan trajectory, which evaluates the performance of the alarm-based model with respect to the chosen reference model. We assume that this differential probability gain is constant over time. Its value at each point of the testing region depends only on the alarm function value. The rate-based model is the product of the event rate of the reference model at this point and the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows similar performance to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, seismic hazard maps with short- and medium-term forecasts.
We consider compact Riemannian spin manifolds without boundary equipped with orthogonal connections. We investigate the induced Dirac operators and the associated commutative spectral triples. In the case of dimension four and totally anti-symmetric torsion we compute the Chamseddine-Connes spectral action, deduce the equations of motion and discuss critical points.
We show that it is possible to approximate the zeta-function of a curve over a finite field by meromorphic functions which satisfy the same functional equation and moreover satisfy (respectively do not satisfy) an analog of the Riemann hypothesis. In the other direction, it is possible to approximate holomorphic functions by simple manipulations of such a zeta-function. No number theory is required to understand the theorems and their proofs, for it is known that the zeta-functions of curves over finite fields are very explicit meromorphic functions. We study the approximation properties of these meromorphic functions.
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. Later stages of the drug development process, however, rely on pharmacokinetic compartment models, while cell-level dynamics are typically neglected. We here present a systematic approach to integrating cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies because their effect and pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor. We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high-affinity antibodies on the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cell cytotoxicity.
Retrieval of aerosol extinction coefficient profiles from Raman lidar data by inversion method
(2012)
We address the problem of differentiation that arises in the retrieval of aerosol extinction coefficient profiles from inelastic Raman lidar signals by searching for a stable solution of the resulting Volterra integral equation. An algorithm based on a projection method and iterative regularization, together with the L-curve method, has been applied to synthetic and measured lidar signals. A strategy for choosing a suitable range for the integration within the framework of the retrieval of optical properties is proposed here, to our knowledge for the first time. A Monte Carlo procedure has been adapted to treat the uncertainty in the retrieval of extinction coefficients.
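The differentiation-as-inversion idea can be sketched with a discrete Volterra (cumulative-integration) operator and Tikhonov regularization. The L-curve corner is approximated here by a crude residual-times-seminorm proxy rather than the curvature-based criterion such retrievals typically use, and all parameters are illustrative:

```python
import numpy as np

def lcurve_derivative(y, dx, lambdas):
    """Stable differentiation as a regularized inverse problem: solve
    y = A u with A the discrete cumulative-integration (Volterra)
    operator, penalizing the first differences of u. The product of
    residual norm and seminorm serves as a crude stand-in for the
    L-curve corner."""
    n = len(y)
    A = np.tril(np.ones((n, n))) * dx       # rectangle-rule integration
    D = np.diff(np.eye(n), axis=0)          # first-difference operator
    best = None
    for lam in lambdas:
        u = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
        res = np.linalg.norm(A @ u - y)     # data misfit
        sem = np.linalg.norm(D @ u)         # solution roughness
        if best is None or res * sem < best[0]:
            best = (res * sem, lam, u)
    return best[2], best[1]
```

With too little regularization the noise is amplified by the inverse of the integration operator; with too much, the derivative is oversmoothed. The proxy picks an intermediate parameter between those extremes.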
We reconsider the fundamental work of Fichtner [2] and exhibit the permanental structure of the ideal Bose gas again, using a new approach which combines a characterization of infinitely divisible random measures (due to Kerstan, Kummer and Matthes [4, 6] and Mecke [9, 10]) with a decomposition of the moment measures into their factorial measures due to Krickeberg [5]. To be more precise, we exhibit the moment measures of all orders of the general ideal Bose gas in terms of certain loop integrals. This representation can be considered as a point process analogue of the old idea of Symanzik [15] that local times and self-crossings of Brownian motion can be used as a tool in quantum field theory. Behind the notion of a general ideal Bose gas there is a class of infinitely divisible point processes of all orders with a Lévy measure belonging to some large class of measures containing that of the classical ideal Bose gas considered by Fichtner. It is well known that the calculation of higher-order moments of point processes is notoriously complicated; see for instance Krickeberg's calculations for the Poisson or the Cox process in [5]. Relations to the work of Shirai and Takahashi [12] and Soshnikov [14] on permanental and determinantal processes are outlined.
Let (M, g) be a complete 3-dimensional asymptotically flat manifold with everywhere positive scalar curvature. We prove that, given a compact subset K ⊂ M, all volume preserving stable constant mean curvature surfaces of sufficiently large area will avoid K. This complements the results of G. Huisken and S.-T. Yau [17] and of J. Qing and G. Tian [26] on the uniqueness of large volume preserving stable constant mean curvature spheres in initial data sets that are asymptotically close to Schwarzschild with mass m > 0. The analysis in [17] and [26] takes place in the asymptotic regime of M. Here we adapt ideas from the minimal surface proof of the positive mass theorem [32] by R. Schoen and S.-T. Yau and develop geometric properties of volume preserving stable constant mean curvature surfaces to handle surfaces that run through the part of M that is far from Euclidean.
In this work, a closure experiment for tropospheric aerosol is presented. Aerosol size distributions and single scattering albedo from remote sensing data are compared to those measured in-situ. An aerosol pollution event on 4 April 2009 was observed by ground-based and airborne lidar and photometer in and around Ny-Alesund, Spitsbergen, as well as by DMPS, nephelometer and particle soot absorption photometer at the nearby Zeppelin Mountain Research Station.
The measurements presented were conducted in an area of 40 x 20 km around Ny-Alesund as part of the 2009 Polar Airborne Measurements and Arctic Regional Climate Model Simulation Project (PAMARCMiP). Aerosol mainly in the accumulation mode was found in the lower troposphere; however, enhanced backscattering was observed up to the tropopause altitude. A comparison of meteorological data available at different locations reveals a stable multi-layer structure of the lower troposphere. This is followed by the retrieval of optical and microphysical aerosol parameters. Extinction values have been derived using two different methods, and it was found that the extinction (especially in the UV) derived from Raman lidar data significantly surpasses that derived from photometer AOD profiles. Airborne lidar data show volume depolarization values of less than 2.5% between 500 m and 2.5 km altitude; hence, particles in this range can be assumed to be of spherical shape. In-situ particle number concentrations measured at the Zeppelin Mountain Research Station at 474 m altitude peak at about 0.18 µm diameter, which was also found for the microphysical inversion calculations performed at 850 m and 1500 m altitude. Number concentrations depend on the assumed extinction values and, like the effective particle diameter, decrease slightly with altitude. A low imaginary part in the derived refractive index suggests weakly absorbing aerosols, which is confirmed by the low black carbon concentrations measured at the Zeppelin Mountain station as well as on board the Polar 5 aircraft.
In the 1980s, the analysis of satellite altimetry data led to the major discovery of gravity lineations in the oceans, with wavelengths between 200 and 1400 km. While the existence of the 200 km scale undulations is widely accepted, undulations at scales larger than 400 km are still a matter of debate. In this paper, we revisit the topic of large-scale geoid undulations over the oceans in the light of the satellite gravity data provided by the GRACE mission, which are considerably more precise than the altimetry data at wavelengths larger than 400 km. First, we develop a dedicated method of directional Poisson wavelet analysis on the sphere with significance testing, in order to detect and characterize directional structures in geophysical data on the sphere at different spatial scales. This method is particularly well suited for potential field analysis. We validate it on a series of synthetic tests, and then apply it to analyze recent gravity models, as well as a bathymetry data set independent from gravity. Our analysis confirms the existence of large-scale gravity undulations in the oceans, with characteristic scales between 600 and 2000 km. Their direction correlates well with present-day plate motion over the Pacific Ocean, where they are particularly clear and associated with a conjugate direction at the 1500 km scale. A major finding is that the 2000 km scale geoid undulations dominate; they had never been observed so clearly before, thanks to the great precision of GRACE data at those wavelengths. Given the large scale of these undulations, they are most likely related to mantle processes.
Taking into account observations and models from other geophysical disciplines, such as seismic tomography, convection and geochemical models, and electrical conductivity in the mantle, we argue that all these inputs indicate a directional fabric of the mantle flows at depth, reflecting how the history of subduction influences the organization of lower mantle upwellings.
The ensemble Kalman filter has emerged as a promising filter algorithm for nonlinear differential equations subject to intermittent observations. In this paper, we extend the well-known Kalman-Bucy filter for linear differential equations subject to continuous observations to the ensemble setting and to nonlinear differential equations. The proposed filter is called the ensemble Kalman-Bucy filter, and its performance is demonstrated for a simple mechanical model (Langevin dynamics) subject to incremental observations of its velocity.
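For intuition, one Euler step of a deterministic ensemble Kalman-Bucy update can be sketched as follows. We assume the symmetric-innovation form common in the literature, a scalar observation-noise precision, and illustrative names throughout; this is not necessarily the exact formulation of the paper:

```python
import numpy as np

def enkbf_step(X, y, H, R_inv, dt, drift):
    """One Euler step of a deterministic ensemble Kalman-Bucy update:
    every member is nudged by the ensemble Kalman gain applied to the
    average of its own innovation and the ensemble-mean innovation.
    X has shape (state_dim, n_members); R_inv is a scalar precision."""
    xbar = X.mean(axis=1, keepdims=True)
    A = X - xbar
    P = A @ A.T / (X.shape[1] - 1)   # ensemble covariance
    K = P @ H.T * R_inv              # ensemble Kalman gain
    innov = 0.5 * (H @ X + H @ xbar) - y
    return X + dt * (drift(X) - K @ innov)
```

Even with a trivial drift, repeated steps pull the ensemble mean towards the observation while the spread contracts, mimicking the continuous-time analysis step.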
In order to examine variations in aftershock decay rate, we propose a Bayesian framework to estimate the {K, c, p}-values of the modified Omori law (MOL), λ(t) = K(c + t)^(-p). The Bayesian setting allows us not only to produce point estimates of these three parameters but also to assess their uncertainties and posterior dependencies with respect to the observed aftershock sequences. Using a new parametrization of the MOL, we identify the trade-off between the c- and p-value estimates and discuss its dependence on the number of aftershocks. Then, we analyze the influence of the catalog completeness interval [t_start, t_stop] on the various estimates. To test this Bayesian approach on natural aftershock sequences, we use two independent and non-overlapping aftershock catalogs of the same earthquakes in Japan. Taking into account the posterior uncertainties, we show that both the handpicked (short times) and the instrumental (long times) catalogs predict the same ranges of parameter values. We therefore conclude that the same MOL may be valid over short and long times.
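The likelihood underlying such a Bayesian analysis is the standard inhomogeneous-Poisson one for the modified Omori law. A minimal sketch of our own (the prior, parametrization and posterior exploration of the paper are not reproduced):

```python
import numpy as np

def mol_loglik(times, K, c, p, t_start, t_stop):
    """Point-process log-likelihood of the modified Omori law
    lambda(t) = K (c + t)^(-p) for aftershock occurrence times
    observed in the completeness interval [t_start, t_stop]."""
    lam = K * (c + times) ** (-p)
    if p != 1.0:
        integral = K * ((c + t_stop) ** (1.0 - p)
                        - (c + t_start) ** (1.0 - p)) / (1.0 - p)
    else:
        integral = K * (np.log(c + t_stop) - np.log(c + t_start))
    return np.sum(np.log(lam)) - integral
```

Evaluating this on a grid of (K, c, p) with a flat prior already exposes the c-p trade-off: a flat ridge of near-equal log-likelihood connects small-c/small-p and large-c/large-p combinations when aftershock counts are low.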
We analyze a general class of difference operators H_ε = T_ε + V_ε on ℓ²((εZ)^d), where V_ε is a multi-well potential and ε is a small parameter. We decouple the wells by introducing certain Dirichlet operators on regions containing only one potential well, and we treat the eigenvalue problem for H_ε as a small perturbation of these comparison problems. We describe tunneling by a certain interaction matrix, similar to the analysis for the Schrödinger operator [see Helffer and Sjöstrand in Commun Partial Differ Equ 9:337-408, 1984], and estimate the remainder, which is exponentially small and roughly quadratic compared with the interaction matrix.
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
The study of the semigroups OPn, of all orientation-preserving transformations on an n-element chain, and ORn, of all orientation-preserving or orientation-reversing transformations on an n-element chain, began in [17] and [5]. In order to bring more insight into the subsemigroup structure of OPn and ORn, we characterize their maximal subsemigroups.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions with mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework in the case of the 2004 M6.0 Parkfield earthquake. We perform a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is a physically reasonable model constrained by the stress condition (namely, postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can be generally explained by a stress relaxation process for the Parkfield case. This result also indicates that the data constraining the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process at the two sides of the fault, which can be explained by a material contrast across the fault of about 1.15 in seismic velocity.
We consider quasicomplexes of pseudodifferential operators on a smooth compact manifold without boundary. To each quasicomplex we associate a complex of symbols. The quasicomplex is elliptic if this symbol complex is exact away from the zero section. We prove that elliptic quasicomplexes are Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes and prove a generalisation of the Atiyah-Singer index theorem.
In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as molecular motors. We derive the exact probability density of the stochastic waiting time that a molecule of species A needs until it reacts with a molecule of species B. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic treatment and in the approximate one.
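A simplified caricature (not the authors' exact density) shows why fluctuations in the number of B partners make the waiting time non-exponential: when the partner count varies across realizations, the marginal waiting-time density is a mixture of exponentials, whose coefficient of variation exceeds one. All names and parameters here are our own:

```python
import numpy as np

rng = np.random.default_rng(2)

def waiting_times(n_samples, k, mean_b):
    """Waiting time of a tagged A molecule when the number of available
    B partners fluctuates from realization to realization (Poisson with
    mean mean_b, conditioned on at least one partner). The marginal
    density is a mixture of exponentials and hence non-exponential."""
    out = np.empty(n_samples)
    i = 0
    while i < n_samples:
        nb = rng.poisson(mean_b)
        if nb == 0:
            continue  # no partner present, draw a new realization
        out[i] = rng.exponential(1.0 / (k * nb))
        i += 1
    return out
```

A pure mass-action description would replace the fluctuating partner count by its mean, giving a single exponential with coefficient of variation exactly one; the surplus above one measures the effect of the fluctuations.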
Different GRACE data analysis centers provide temporal variations of the Earth's gravity field as monthly, 10-daily or weekly solutions. These temporal mean fields cannot model the variations occurring during the respective time span. The aim of our approach is to extract as much temporal information as possible from the given GRACE data. Therefore, the temporal resolution is increased with the goal of deriving daily snapshots. Yet such an increase in temporal resolution is accompanied by a loss of redundancy and therefore by reduced accuracy if the daily solutions are calculated individually. The approach presented here therefore introduces spatial and temporal correlations of the expected gravity field signal, derived from geophysical models, in addition to the daily observations, thus effectively constraining the spatial and temporal evolution of the GRACE solution. The GRACE data processing is then performed within the framework of a Kalman filter and smoother estimation procedure.
The approach is at first investigated in a closed-loop simulation scenario and then applied to the original GRACE observations (level-1B data) to calculate daily solutions as part of the gravity field model ITG-Grace2010. Finally, the daily models are compared to vertical GPS station displacements and ocean bottom pressure observations.
From these comparisons it can be concluded that, particularly at higher latitudes, the daily solutions contain high-frequency temporal gravity field information and represent an improvement over existing geophysical models.
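The filter-and-smoother machinery used above can be illustrated in the scalar case. This is a generic Rauch-Tung-Striebel sketch in which an AR(1) process model stands in for the geophysically derived correlations; all symbols and settings are illustrative, not those of the ITG-Grace2010 processing:

```python
import numpy as np

def kalman_smoother(y, phi, q, r, x0, p0):
    """Scalar Kalman filter plus Rauch-Tung-Striebel smoother: the
    process model x_t = phi * x_{t-1} + w (variance q) plays the role
    of the temporal constraint; y_t = x_t + v (variance r) are the
    daily observations."""
    n = len(y)
    xf = np.empty(n); pf = np.empty(n)   # filtered mean / variance
    xp = np.empty(n); pp = np.empty(n)   # predicted mean / variance
    x, p = x0, p0
    for t in range(n):
        x, p = phi * x, phi * phi * p + q        # predict
        xp[t], pp[t] = x, p
        k = p / (p + r)                          # Kalman gain
        x = x + k * (y[t] - x)                   # update
        p = (1.0 - k) * p
        xf[t], pf[t] = x, p
    xs = xf.copy()
    for t in range(n - 2, -1, -1):               # RTS backward pass
        g = pf[t] * phi / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs
```

The smoother borrows strength from both past and future observations, which is what lets daily solutions retain signal despite the loss of redundancy noted above.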
Bayesian selection of Markov models for symbol sequences: application to microsaccadic eye movements
(2012)
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
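The integrated likelihood in question has a closed form when each row of the transition matrix receives a symmetric Dirichlet prior. A sketch with a binary alphabet and alpha = 1 (these choices, and the demo chain, are ours, not the paper's):

```python
import random
from collections import Counter
from math import lgamma

def log_evidence(seq, order, n_symbols, alpha=1.0):
    """Integrated (marginal) likelihood of a symbol sequence under a
    Markov model of the given order, with a symmetric Dirichlet(alpha)
    prior on every row of transition probabilities (closed form via
    Dirichlet-multinomial conjugacy)."""
    counts = Counter()
    for i in range(order, len(seq)):
        counts[(tuple(seq[i - order:i]), seq[i])] += 1
    contexts = Counter()
    for (ctx, _), n in counts.items():
        contexts[ctx] += n
    logp = 0.0
    for ctx, total in contexts.items():
        logp += lgamma(n_symbols * alpha) - lgamma(n_symbols * alpha + total)
        for s in range(n_symbols):
            logp += lgamma(alpha + counts[(ctx, s)]) - lgamma(alpha)
    return logp

# Demo: a binary chain with strong first-order persistence (stay prob. 0.9)
random.seed(5)
seq = [0]
for _ in range(1999):
    seq.append(seq[-1] if random.random() < 0.9 else 1 - seq[-1])
e0, e1, e2 = (log_evidence(seq, k, 2) for k in (0, 1, 2))
```

Because the marginal likelihood automatically penalizes the extra transition rows of higher orders, the first-order model wins on this chain, mirroring the model-selection logic described in the abstract.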
The principal object in noncommutative geometry is the spectral triple consisting of an algebra A, a Hilbert space H and a Dirac operator D. Field theories are incorporated in this approach by the spectral action principle, which sets the field theory action to Tr f(D^2/Λ^2), where f is a real function such that the trace exists and Λ is a cutoff scale. In the low-energy (weak-field) limit, the spectral action reproduces reasonably well the known physics including the standard model. However, not much is known about the spectral action beyond the low-energy approximation. In this paper, after an extensive introduction to spectral triples and spectral actions, we study various expansions of the spectral actions (exemplified by the heat kernel). We derive the convergence criteria. For a commutative spectral triple, we compute the heat kernel on the torus up to second order in the gauge connection and consider limiting cases.
This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to 'Applications of zeta functions and other spectral functions in mathematics and physics'.
The authors discuss the use of the discrepancy principle for statistical inverse problems when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail or yield sub-optimal rates. Therefore, a modification of the discrepancy principle is introduced which corrects both of the above deficiencies. For a variety of linear regularization schemes, as well as for conjugate gradient iteration, it is shown to yield order-optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, though in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise.
In this study we analyse the error distribution in regional models of the geomagnetic field. Our main focus is to investigate the distribution of errors when combining two regional patches to obtain a global field from regional ones. To simulate errors in overlapping patches we choose two different data region shapes that resemble that scenario. First, we investigate the errors in elliptical regions, and secondly we choose a region obtained from two overlapping circular spherical caps. We conduct a Monte-Carlo simulation using synthetic data to obtain the expected mean errors. For the elliptical regions the results are similar to those obtained for circular spherical caps: the error is largest at the boundary and decreases towards the centre of the region. A new result is that errors at the boundary vary with azimuth, being largest in the major-axis direction and minimal in the minor-axis direction. Inside the region the error decays towards a minimum at the centre, at a rate similar to that in circular regions. In the case of two combined circular regions there is also an error decay from the boundary towards the centre. The minimum error occurs at the centre of the combined regions. The maximum error at the boundary occurs on the line containing the two cap centres, the minimum in the perpendicular direction where the two circular cap boundaries meet. The large errors at the boundary are eliminated by combining regional patches. We propose an algorithm for finding the boundary region that is applicable to irregularly shaped model regions.
The paper presents a classification of the basic types of admissible solutions of the general Friedmann equation with non-vanishing cosmological constant and for the case that radiation and matter do not couple. There are four distinct types. The classification first uses the discriminant of a polynomial of the third degree, closely related to the right-hand side of the Friedmann equation. The decisive term is then a critical radiation density, which can be calculated explicitly.
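The discriminant test at the heart of such a classification can be sketched generically. The specific cubic and the critical radiation density depend on the cosmological parameters and are not reproduced here; only the standard root-counting criterion is shown:

```python
def cubic_discriminant(a, b, c, d):
    """Discriminant of the cubic a*x^3 + b*x^2 + c*x + d.
    > 0: three distinct real roots; = 0: a repeated real root;
    < 0: one real root and a complex-conjugate pair."""
    return (18 * a * b * c * d - 4 * b ** 3 * d + b ** 2 * c ** 2
            - 4 * a * c ** 3 - 27 * a ** 2 * d ** 2)

def root_type(a, b, c, d):
    """Classify the real-root structure of a cubic by its discriminant."""
    disc = cubic_discriminant(a, b, c, d)
    if disc > 0:
        return "three distinct real roots"
    if disc == 0:
        return "repeated real root"
    return "one real root"
```

In the Friedmann setting, the sign pattern of the relevant cubic's roots determines where the right-hand side of the equation is non-negative, i.e. which scale-factor ranges admit solutions; the discriminant separates the qualitatively different cases.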