This paper reports on the historical development of the Runge-Kutta methods beginning with the simple Euler method up to an embedded 13-stage method. Moreover, the design and the use of those methods under error order, stability and computation time conditions is edited for students of numerical analysis at undergraduate level. The second part presents applications in natural sciences, compares different methods and illustrates some of the difficulties of numerical solutions.
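The contrast between the simple Euler method and a higher-order Runge-Kutta scheme that the paper traces historically can be sketched in a few lines. This is a generic illustration on the test equation y' = y, not code from the paper:

```python
import math

def euler_step(f, t, y, h):
    # Explicit Euler: one function evaluation per step, order 1.
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # Classical 4-stage Runge-Kutta method, order 4.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, t0, y0, t_end, n):
    # Fixed-step integration from t0 to t_end in n steps.
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

# Test problem y' = y, y(0) = 1, exact solution e^t at t = 1.
f = lambda t, y: y
err_euler = abs(integrate(euler_step, f, 0.0, 1.0, 1.0, 100) - math.e)
err_rk4 = abs(integrate(rk4_step, f, 0.0, 1.0, 1.0, 100) - math.e)
```

With 100 steps the fourth-order method is many orders of magnitude more accurate than Euler at four times the cost per step, which is the error-order/computation-time trade-off the survey discusses.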
The estimation of a log-concave density on R is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that obviates the need for further metric entropy calculations. We further present computationally more feasible approximations and both an empirical and hierarchical Bayes approach. All priors are illustrated numerically via simulations.
We study the Cauchy problem for a nonlinear elliptic equation with data on a piece S of the boundary surface partial derivative X. By the Cauchy problem is meant any boundary value problem for an unknown function u in a domain X with the property that the data on S, if combined with the differential equations in X, allows one to determine all derivatives of u on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We discuss a variational setting of the Cauchy problem which always possesses a generalized solution.
Let v be a valuation of terms of type tau, assigning to each term t of type tau a value v(t) ≥ 0. Let k ≥ 1 be a natural number. An identity s ≈ t of type tau is called k-normal if either s = t or both s and t have value ≥ k, and is called non-k-normal otherwise. A variety V of type tau is said to be k-normal if all its identities are k-normal, and non-k-normal otherwise. In the latter case, there is a unique smallest k-normal variety N_k(V) containing V, called the k-normalization of V. In the case k = 1, for the usual depth valuation of terms, these notions coincide with the well-known concepts of normal identity, normal variety, and normalization of a variety. I. Chajda has characterized the normalization of a variety by means of choice algebras. In this paper we generalize his results to a characterization of the k-normalization of a variety, using k-choice algebras. We also introduce the concept of a k-inflation algebra, and for the case that v is the usual depth valuation of terms, we prove that a variety V is k-normal iff it is closed under the formation of k-inflations, and that the k-normalization of V consists precisely of all homomorphic images of k-inflations of algebras in V.
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
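A minimal sketch of one GHMC step with the standard momentum flip on rejection, as described above, for a one-dimensional harmonic oscillator. The step size, refreshment angle, and potential are illustrative choices, not parameters from the paper:

```python
import math
import random

def leapfrog(q, p, eps, n, grad):
    # Leapfrog (velocity Verlet) integrator for H(q,p) = U(q) + p^2/2.
    p -= 0.5 * eps * grad(q)
    for _ in range(n - 1):
        q += eps * p
        p -= eps * grad(q)
    q += eps * p
    p -= 0.5 * eps * grad(q)
    return q, p

def ghmc_step(q, p, eps, n, phi, U, grad, rng):
    # Partial momentum refreshment: mix old momentum with fresh Gaussian noise.
    p = math.cos(phi) * p + math.sin(phi) * rng.gauss(0.0, 1.0)
    H0 = U(q) + 0.5 * p * p
    q_new, p_new = leapfrog(q, p, eps, n, grad)
    H1 = U(q_new) + 0.5 * p_new * p_new
    if rng.random() < math.exp(min(0.0, H0 - H1)):
        return q_new, p_new           # accept the molecular dynamics proposal
    return q, -p                      # reject: standard momentum flip

U = lambda q: 0.5 * q * q             # harmonic potential, target exp(-H)
grad = lambda q: q
rng = random.Random(1)
q, p = 1.0, 0.0
samples = []
for _ in range(20000):
    q, p = ghmc_step(q, p, 0.2, 5, 0.5, U, grad, rng)
    samples.append(q)
mean_q = sum(samples) / len(samples)
var_q = sum(x * x for x in samples) / len(samples)
```

For the standard Gaussian target, the sampled position mean and variance should be close to 0 and 1 respectively.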
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models over the past years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of the event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
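The event/non-event decomposition of the NRI referred to above can be computed from paired risk estimates. The sketch below implements the category-free (continuous) variant on hypothetical toy data; it is a generic illustration, not the confidence-ellipse method of the paper:

```python
def nri_components(old_risk, new_risk, outcome):
    # Category-free NRI: for each group, (fraction reclassified in the
    # 'right' direction) minus (fraction reclassified in the 'wrong' one).
    events = [(o, n) for o, n, y in zip(old_risk, new_risk, outcome) if y == 1]
    nonevents = [(o, n) for o, n, y in zip(old_risk, new_risk, outcome) if y == 0]
    up_e = sum(n > o for o, n in events)        # events moved up: correct
    down_e = sum(n < o for o, n in events)      # events moved down: incorrect
    up_ne = sum(n > o for o, n in nonevents)    # non-events moved up: incorrect
    down_ne = sum(n < o for o, n in nonevents)  # non-events moved down: correct
    nri_event = (up_e - down_e) / len(events)
    nri_nonevent = (down_ne - up_ne) / len(nonevents)
    return nri_event, nri_nonevent

# Toy data: the new model raises risk for most events, lowers it for most non-events.
old = [0.2, 0.3, 0.6, 0.1, 0.4, 0.5]
new = [0.4, 0.5, 0.5, 0.05, 0.3, 0.6]
y = [1, 1, 1, 0, 0, 0]
nri_e, nri_ne = nri_components(old, new, y)
```

Reporting the pair (nri_e, nri_ne) separately, as the paper advocates via confidence ellipses, avoids the loss of information incurred by summing them into a single number.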
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing it in everyday therapeutic use.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
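The forward committor that the method above approximates can, in simple low-dimensional settings, also be estimated by brute-force sampling. The sketch below (a generic illustration, not the authors' trajectory-data method) estimates the committor of a one-dimensional double-well overdamped Langevin diffusion by direct simulation; the potential, temperature, and step size are illustrative choices:

```python
import math
import random

def committor_estimate(x0, in_A, in_B, drift, dt, beta, n_traj, rng):
    # Forward committor q+(x0): probability that a trajectory started at x0
    # reaches B before A, estimated by direct Monte Carlo over n_traj paths
    # of the Euler-Maruyama discretized SDE dX = drift(X) dt + sqrt(2/beta) dW.
    hits_B = 0
    sigma = math.sqrt(2.0 * dt / beta)
    for _ in range(n_traj):
        x = x0
        while True:
            if in_A(x):
                break
            if in_B(x):
                hits_B += 1
                break
            x += drift(x) * dt + sigma * rng.gauss(0.0, 1.0)
    return hits_B / n_traj

# Double-well potential U(x) = (x^2 - 1)^2; metastable sets around x = -1 and x = +1.
drift = lambda x: -4.0 * x * (x * x - 1.0)   # drift = -U'(x)
in_A = lambda x: x < -1.0
in_B = lambda x: x > 1.0
rng = random.Random(0)
q_mid = committor_estimate(0.0, in_A, in_B, drift, 0.01, 3.0, 500, rng)
```

By symmetry of the double well, the committor at the barrier top x = 0 should be close to 1/2.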
We study those nonlinear partial differential equations which appear as Euler-Lagrange equations of variational problems. On defining weak boundary values of solutions to such equations, we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree, currently of importance, applies to Lagrangian problems.
We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.
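A minimal perturbed-observations ensemble Kalman filter analysis step for a scalar state illustrates the ensemble technique referred to above. This is a generic textbook sketch, not the continuous formulation developed in the paper:

```python
import random

def enkf_analysis(ensemble, y_obs, H, R, rng):
    # Stochastic (perturbed-observations) EnKF analysis step.
    # ensemble: list of scalar states; H: observation operator (callable);
    # R: observation error variance.
    N = len(ensemble)
    hx = [H(x) for x in ensemble]
    x_mean = sum(ensemble) / N
    hx_mean = sum(hx) / N
    # Sample covariances estimated from the ensemble.
    P_xh = sum((x - x_mean) * (h - hx_mean) for x, h in zip(ensemble, hx)) / (N - 1)
    P_hh = sum((h - hx_mean) ** 2 for h in hx) / (N - 1)
    K = P_xh / (P_hh + R)                      # Kalman gain
    # Each member assimilates its own perturbed observation.
    return [x + K * (y_obs + rng.gauss(0.0, R ** 0.5) - h)
            for x, h in zip(ensemble, hx)]

rng = random.Random(42)
prior = [rng.gauss(0.0, 2.0) for _ in range(200)]     # prior ~ N(0, 4)
post = enkf_analysis(prior, 1.0, lambda x: x, 1.0, rng)
post_mean = sum(post) / len(post)
post_var = sum((x - post_mean) ** 2 for x in post) / (len(post) - 1)
```

For this linear Gaussian example the exact posterior is N(0.8, 0.8), so the ensemble statistics should be close to those values.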
A new efficient algorithm is presented for joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem, and addresses diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The algorithm's efficiency stems from the special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in the computation of the quasi-Newton update step. Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of this comparison demonstrate that the proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms. The practical use of our algorithm is shown for blind source separation problems.
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
Parallel file systems like PVFS2 are a necessary component for high-performance computing. The design of efficient communication layers for these systems is still of great research interest. This paper presents a low-latency messaging method for PVFS2 dedicated to Gigabit Ethernet networks and discusses relevant design issues. In contrast to other approaches, we argue that zero-copying can be achieved also for big messages without use of a rendezvous protocol. Furthermore, efficiency within the communication layer, such as a small call stack, plays an important role.
We prove a homology vanishing theorem for graphs with positive Bakry-Émery curvature, analogous to a classic result of Bochner on manifolds [3]. Specifically, we prove that if a graph has positive curvature at every vertex, then its first homology group is trivial, where the notion of homology that we use for graphs is the path homology developed by Grigor'yan, Lin, Muranov, and Yau [11]. We moreover prove that the fundamental group is finite for graphs with positive Bakry-Émery curvature, analogous to a classic result of Myers on manifolds [22]. The proofs draw on several separate areas of graph theory, including graph coverings, gain graphs, and cycle spaces, in addition to the Bakry-Émery curvature, path homology, and graph homotopy. The main results follow as a consequence of several different relationships developed among these different areas. Specifically, we show that a graph with positive curvature cannot have a non-trivial infinite cover preserving 3-cycles and 4-cycles, and give a combinatorial interpretation of the first path homology in terms of the cycle space of a graph. Furthermore, we relate gain graphs to graph homotopy and the fundamental group developed by Grigor'yan, Lin, Muranov, and Yau [12], and obtain an alternative proof of their result that the abelianization of the fundamental group of a graph is isomorphic to the first path homology over the integers.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
Ensemble Kalman filter techniques are widely used to assimilate observations into dynamical models. The phase- space dimension is typically much larger than the number of ensemble members, which leads to inaccurate results in the computed covariance matrices. These inaccuracies can lead, among other things, to spurious long-range correlations, which can be eliminated by Schur-product-based localization techniques. In this article, we propose a new technique for implementing such localization techniques within the class of ensemble transform/square-root Kalman filters. Our approach relies on a continuous embedding of the Kalman filter update for the ensemble members, i.e. we state an ordinary differential equation (ODE) with solutions that, over a unit time interval, are equivalent to the Kalman filter update. The ODE formulation forms a gradient system with the observations as a cost functional. Besides localization, the new ODE ensemble formulation should also find useful application in the context of nonlinear observation operators and observations that arrive continuously in time.
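A common way to realize the Schur-product localization mentioned above is elementwise tapering of the sample covariance with a compactly supported correlation function, such as the Gaspari-Cohn function. The grid, distances, and radius below are illustrative; this is a generic sketch of the localization idea, not the ODE-based scheme of the article:

```python
def gaspari_cohn(r):
    # Gaspari-Cohn fifth-order compactly supported correlation function;
    # r = distance / localization radius, zero beyond r = 2.
    if r >= 2.0:
        return 0.0
    if r >= 1.0:
        return ((((r / 12.0 - 0.5) * r + 0.625) * r + 5.0 / 3.0) * r - 5.0) * r \
               + 4.0 - 2.0 / (3.0 * r)
    return (((-0.25 * r + 0.5) * r + 0.625) * r - 5.0 / 3.0) * r ** 2 + 1.0

def localize(P, radius):
    # Schur (elementwise) product of the sample covariance P with a tapering
    # matrix built from the Gaspari-Cohn function; distance here is |i - j|
    # on a one-dimensional grid.
    n = len(P)
    return [[P[i][j] * gaspari_cohn(abs(i - j) / radius) for j in range(n)]
            for i in range(n)]

# Toy 3x3 sample covariance with spurious long-range correlations.
P = [[1.0, 0.9, 0.8], [0.9, 1.0, 0.9], [0.8, 0.9, 1.0]]
P_loc = localize(P, 1.0)
```

Entries at distance twice the radius or more are set exactly to zero, eliminating the spurious long-range correlations, while the diagonal is untouched.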
Author summary: Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.
Understanding the decision process underlying gaze control is an important question in cognitive neuroscience with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection process between two states: Should the observer further inspect the information near the current gaze position (local attention) or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye movement behavior. We implemented a Bayesian approach for model parameter inference based on the model's likelihood function. In order to simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is two-fold; first, we propose a new model for saccade generation in scene viewing.
Second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
Author summary: The use of orally inhaled drugs for treating lung diseases is appealing since they have the potential for lung selectivity, i.e. high exposure at the site of action (the lung) without excessive side effects. However, the degree of lung selectivity depends on a large number of factors, including physicochemical properties of drug molecules, patient disease state, and inhalation devices. To predict the impact of these factors on drug exposure and thereby to understand the characteristics of an optimal drug for inhalation, we develop a predictive mathematical framework (a "pharmacokinetic model"). In contrast to previous approaches, our model allows combining knowledge from different sources appropriately, and it was able to adequately predict different sets of clinical data. Finally, we compare the impact of different factors and find that the most important factors are the size of the inhaled particles, the affinity of the drug to the lung tissue, as well as the rate of drug dissolution in the lung. In contrast to common belief, the solubility of a drug in the lining fluids is not found to be relevant. These findings are important to understand how inhaled drugs should be designed to achieve the best treatment results in patients.
The fate of orally inhaled drugs is determined by pulmonary pharmacokinetic processes such as particle deposition, pulmonary drug dissolution, and mucociliary clearance. Even though each single process has been systematically investigated, a quantitative understanding of the interaction of processes remains limited and therefore identifying optimal drug and formulation characteristics for orally inhaled drugs is still challenging. To investigate this complex interplay, the pulmonary processes can be integrated into mathematical models. However, existing modeling attempts considerably simplify these processes or are not systematically evaluated against (clinical) data.
In this work, we developed a mathematical framework based on physiologically-structured population equations to integrate all relevant pulmonary processes mechanistically. A tailored numerical resolution strategy was chosen and the mechanistic model was evaluated systematically against data from different clinical studies. Without adapting the mechanistic model or estimating kinetic parameters based on individual study data, the developed model was able to predict simultaneously (i) lung retention profiles of inhaled insoluble particles, (ii) particle size-dependent pharmacokinetics of inhaled monodisperse particles, (iii) pharmacokinetic differences between inhaled fluticasone propionate and budesonide, as well as (iv) pharmacokinetic differences between healthy volunteers and asthmatic patients. Finally, to identify the most impactful optimization criteria for orally inhaled drugs, the developed mechanistic model was applied to investigate the impact of input parameters on both the pulmonary and systemic exposure. Interestingly, the solubility of the inhaled drug did not have any relevant impact on the local and systemic pharmacokinetics. Instead, the pulmonary dissolution rate, the particle size, the tissue affinity, and the systemic clearance were the most impactful potential optimization parameters. In the future, the developed prediction framework should be considered a powerful tool for identifying optimal drug and formulation characteristics.
We present a Monte Carlo technique for sampling from the canonical distribution in molecular dynamics. The method is built upon the Nosé-Hoover constant temperature formulation and the generalized hybrid Monte Carlo method. In contrast to standard hybrid Monte Carlo methods, only the thermostat degree of freedom is stochastically resampled during a Monte Carlo step.
In this paper, we investigate the continuous version of modified iterative Runge-Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time T is a solution of ‖F(x^δ(T)) − y^δ‖ = τδ₊ for some δ₊ > δ, and an appropriate source condition. We obtain the optimal rate of convergence.
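The discrepancy-principle stopping rule described above can be illustrated with a simple linear iteration. The sketch below uses classical Landweber iteration on a hypothetical 2x2 toy system as a stand-in for the Runge-Kutta-type methods analyzed in the paper; all parameter values are illustrative:

```python
def landweber_with_discrepancy(A, y_delta, delta, tau, step, max_iter):
    # Landweber iteration x_{k+1} = x_k - step * A^T (A x_k - y_delta),
    # stopped by the discrepancy principle: first k with
    # ||A x_k - y_delta|| <= tau * delta.
    n = len(A[0])
    x = [0.0] * n
    for k in range(max_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y_delta[i]
             for i in range(len(A))]
        res = sum(ri * ri for ri in r) ** 0.5
        if res <= tau * delta:
            return x, k, res
        for j in range(n):
            x[j] -= step * sum(A[i][j] * r[i] for i in range(len(A)))
    return x, max_iter, res

# Mildly ill-conditioned toy system with noisy data at noise level delta.
A = [[1.0, 0.0], [0.0, 0.1]]
delta = 0.01
y_delta = [1.005, 0.095]      # noisy version of A applied to x_true = (1, 1)
x_rec, k_stop, res = landweber_with_discrepancy(A, y_delta, delta, 1.5, 0.5, 10000)
```

Stopping as soon as the residual drops to the order of the noise level prevents the iteration from fitting the noise, which is the regularizing role the discrepancy principle plays in the paper's analysis.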
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high-frequency adjustment processes in the model dynamics. Various methods have been devised to spread out the analysis increments continuously over a fixed time interval centred about the analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
We develop a multigrid, multiple time stepping scheme to reduce computational efforts for calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scale, grid-coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the part with small contributions is computed on a coarse level, while the strong interactions are computed on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is further enhanced by combining the multigrid with multiple time stepping. Computational efficiency is improved by a factor of 10 while retaining a reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
We establish essential steps of an iterative approach to operator algebras, ellipticity and Fredholm property on stratified spaces with singularities of second order. We cover, in particular, corner-degenerate differential operators. Our constructions are focused on the case where no additional conditions of trace and potential type are posed, but this case works well and will be considered in a forthcoming paper as a conclusion of the present calculus.
A new condensation principle
(2005)
We generalize ∇(A), which was introduced in [Sch∞], to larger cardinals. For a regular cardinal κ > ℵ₀ we denote by ∇κ(A) the statement that A ⊆ κ and, for all regular θ > κ⁺, the set {X ∈ [L_θ[A]]^{<κ} : X ∩ κ ∈ κ and otp(X ∩ Ord) ∈ Card^{L[A ∩ X ∩ κ]}} is stationary in [L_θ[A]]^{<κ}. It was shown in [Sch∞] that ∇ℵ₁(A) can hold in a set-generic extension of L. We here prove that ∇ℵ₂(A) can hold in a set-generic extension of L as well. In both cases we in fact get equiconsistency theorems. This strengthens results of [Ra00] and [Ran01]. ∇ℵ₃(∅) is equivalent to the existence of 0#.
We prove a version of the Hopf-Rinow theorem with respect to path metrics on discrete spaces. The novel aspect is that we do not a priori assume local finiteness but isolate a local finiteness type condition, called essentially locally finite, that is indeed necessary. As a side product we identify the maximal weight, called the geodesic weight, generating the path metric in the situation when the space is complete with respect to any of the equivalent notions of completeness proven in the Hopf-Rinow theorem. As an application we characterize the graphs for which the resistance metric is a path metric induced by the graph structure.
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
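For scalar states, the discrete optimal transportation problem mentioned above reduces to the monotone (sorted) coupling between the weighted and the uniformly weighted empirical measures. The following sketch illustrates this idea via barycentric projection of the 1D optimal coupling on toy weights; it is a simplified illustration of the transform concept, not the paper's algorithm:

```python
def transport_transform(particles, weights):
    # Deterministic 1D ensemble transform: couple the weighted empirical
    # measure with the uniform one by the monotone coupling (optimal in 1D)
    # and map each equally weighted particle to the barycentre of its
    # coupled mass.
    N = len(particles)
    order = sorted(range(N), key=lambda i: particles[i])
    new = []
    i, supply = 0, weights[order[0]]
    for _ in range(N):
        demand, acc = 1.0 / N, 0.0
        while demand > 1e-15:
            take = min(supply, demand)
            acc += take * particles[order[i]]
            supply -= take
            demand -= take
            if supply <= 1e-15:
                if i == N - 1:
                    break
                i += 1
                supply = weights[order[i]]
        new.append(acc * N)
    return new

# Importance weights, e.g. from a Bayesian update, transformed to equal weights.
xs = [-1.0, 0.0, 1.0, 2.0]
ws = [0.1, 0.2, 0.3, 0.4]
xt = transport_transform(xs, ws)
mean_after = sum(xt) / len(xt)
mean_weighted = sum(w * x for w, x in zip(ws, xs))
```

Unlike random resampling, this transformation is deterministic, preserves the weighted mean exactly, and keeps the transformed ensemble monotone in the original ordering.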
We consider the Cauchy problem for the heat equation in a cylinder C_T = X × (0, T) over a domain X in R^n, with data on a strip lying on the lateral surface. The strip is of the form S × (0, T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S, we derive an explicit formula for solutions of this problem.
In this note, we consider the semigroup O(X) of all order endomorphisms of an infinite chain X and the subset J of O(X) of all transformations α such that |Im(α)| = |X|. For an infinite countable chain X, we give a necessary and sufficient condition on X for O(X) = ⟨J⟩ to hold. We also present a sufficient condition on X for O(X) = ⟨J⟩ to hold for an arbitrary infinite chain X.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data allows one to quantify this bias in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment". The incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
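The bias discussed above is easy to reproduce with synthetic catalogs: sampling from a truncated Gutenberg-Richter law shows that the maximum observed magnitude systematically underestimates the true M_max. The b-value, magnitude range, and catalog size below are illustrative choices, not values from the note:

```python
import math
import random

def sample_gr(b, m_min, m_max, n, rng):
    # Truncated Gutenberg-Richter law: survival function proportional to
    # 10^(-b*m) on [m_min, m_max], sampled by inverse-CDF.
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return [m_min - math.log(1.0 - c * rng.random()) / beta for _ in range(n)]

rng = random.Random(7)
m_true = 8.0                      # true maximum magnitude of the model
trials = 200
gaps = []
for _ in range(trials):
    catalog = sample_gr(1.0, 5.0, m_true, 500, rng)
    gaps.append(m_true - max(catalog))   # underestimate of M_max per catalog
mean_gap = sum(gaps) / trials
```

With 500 events per synthetic catalog, the observed maximum falls short of the true M_max by several tenths of a magnitude unit on average, which is the order of the heuristic increment ΔM quoted above.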
We consider a perturbation of the de Rham complex on a compact manifold with boundary. This perturbation goes beyond the framework of complexes, and so cohomology does not apply to it. On the other hand, its curvature is "small", hence there is a natural way to introduce an Euler characteristic and develop a Lefschetz theory for the perturbation. This work is intended as an attempt to develop a cohomology theory for arbitrary sequences of linear mappings.
Transport molecules play a crucial role for cell viability. Amongst others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion. Each step of the motor, either forwards or backwards, bridges a fixed distance and requires several biochemical transformations, which are modeled as internal states of the motor. While moving along the rope, the motor can also detach, and the walk is interrupted. We give here a mathematical formalization of such dynamics as a random process which extends random walks by an absorbing state modeling the detachment of the motor from the rope. We derive properties of such processes that have not been available before. Our results include a description of the maximal distance reached from the starting point and of the position from which detachment takes place. Finally, we apply our theoretical results to a concrete established model of the transport molecule Kinesin V.
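The random walk with an absorbing detachment state described above can be simulated directly. The transition probabilities below are illustrative, not fitted kinesin parameters, and the sketch uses a single internal state for simplicity:

```python
import random

def motor_walk(p_fwd, p_back, p_detach, max_steps, rng):
    # Nearest-neighbour random walk with an absorbing 'detached' state:
    # at each step the motor detaches, moves +1, or moves -1.
    pos, path_max = 0, 0
    for _ in range(max_steps):
        u = rng.random()
        if u < p_detach:
            return pos, path_max, True       # detachment position, max excursion
        if u < p_detach + p_fwd:
            pos += 1
        else:
            pos -= 1
        path_max = max(path_max, pos)
    return pos, path_max, False

rng = random.Random(3)
runs = [motor_walk(0.6, 0.3, 0.1, 10000, rng) for _ in range(2000)]
detached = [r for r in runs if r[2]]
mean_detach_pos = sum(r[0] for r in detached) / len(detached)
```

With these probabilities the motor makes on average 9 moves before detaching, with a drift of 1/3 per move, so the mean detachment position is about 3 steps forward; both the detachment position and the maximal excursion are the quantities characterized in the paper.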
Let A be a nonlinear differential operator on an open set X ⊂ R^n and S a closed subset of X. Given a class F of functions in X, the set S is said to be removable for F relative to A if any weak solution of A(u) = 0 in X \ S of class F satisfies this equation weakly in all of X. For the most extensively studied classes F, we show conditions on S which guarantee that S is removable for F relative to A.
We prove that if u is a locally Lipschitz continuous function on an open set χ ⊂ R^(n+1) satisfying the nonlinear heat equation ∂_t u = Δ(|u|^(p−1) u), p > 1, weakly away from the zero set u^(−1)(0) in χ, then u is a weak solution to this equation in all of χ.
The study of the Cauchy problem for solutions of the heat equation in a cylindrical domain with data on the lateral surface by the Fourier method raises the problem of calculating the inverse Laplace transform of the entire function cos √z. This problem has no solution in the standard theory of the Laplace transform. We give an explicit formula for the inverse Laplace transform of cos √z using the theory of analytic functionals. This solution is well suited to efficiently developing the regularization of solutions to Cauchy problems for parabolic equations with data on noncharacteristic surfaces.
We reconsider the fundamental work of Fichtner [2] and exhibit the permanental structure of the ideal Bose gas again, using a new approach which combines a characterization of infinitely divisible random measures (due to Kerstan, Kummer and Matthes [4, 6] and Mecke [9, 10]) with a decomposition of the moment measures into their factorial measures due to Krickeberg [5]. To be more precise, we exhibit the moment measures of all orders of the general ideal Bose gas in terms of certain loop integrals. This representation can be considered as a point process analogue of the old idea of Symanzik [15] that local times and self-crossings of Brownian motion can be used as a tool in quantum field theory. Behind the notion of a general ideal Bose gas there is a class of infinitely divisible point processes of all orders with a Lévy measure belonging to some large class of measures containing that of the classical ideal Bose gas considered by Fichtner. It is well known that the calculation of moments of higher order of point processes is notoriously complicated; see for instance Krickeberg's calculations for the Poisson or the Cox process in [5]. Relations to the work of Shirai and Takahashi [12] and Soshnikov [14] on permanental and determinantal processes are outlined.
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler-Jones-Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Guneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah-Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaume, Atiyah, Bismut and Witten.
We discuss the chiral anomaly for a Weyl field in a curved background and show that a novel index theorem for the Lorentzian Dirac operator can be applied to describe the gravitational chiral anomaly. A formula for the total charge generated by the gravitational and gauge field background is derived directly in Lorentzian signature and in a mathematically rigorous manner. It contains a term identical to the integrand in the Atiyah-Singer index theorem and another term involving the η-invariant of the Cauchy hypersurfaces.
This paper presents a scalable E-band radar platform based on single-channel fully integrated transceivers (TRX) manufactured in 130-nm silicon-germanium (SiGe) BiCMOS technology. The TRX is suitable for flexible radar systems exploiting massive multiple-input multiple-output (MIMO) techniques for multidimensional sensing. A fully integrated fractional-N phase-locked loop (PLL) comprising a 39.5-GHz voltage-controlled oscillator is used to generate wideband frequency-modulated continuous-wave (FMCW) chirps for E-band radar front ends. The TRX is equipped with a vector modulator (VM) for high-speed carrier modulation and beam-forming techniques. A single TRX achieves 19.2-dBm maximum output power and 27.5-dB total conversion gain with an input-referred 1-dB compression point of −10 dBm. It consumes 220 mA from a 3.3-V supply and occupies 3.96 mm² of silicon area. A two-channel radar platform based on full-custom TRXs and a PLL was fabricated to demonstrate high-precision and high-resolution FMCW sensing. The radar enables up to 10-GHz frequency ramp generation in the 74-84-GHz range, which results in 1.5-cm spatial resolution. Due to the high output power, and thus high signal-to-noise ratio (SNR), a ranging precision of 7.5 µm for a target at 2 m was achieved. The proposed architecture supports scalable multichannel applications for automotive FMCW radar using a single local oscillator (LO).
We consider the semiclassical asymptotic expansion of the heat kernel coming from Witten's perturbation of the de Rham complex by a given function. For the index, one obtains a time-dependent integral formula which is evaluated by the method of stationary phase to derive the Poincare-Hopf theorem. We show how this method is related to approaches using the Thom form of Mathai and Quillen. Afterwards, we use a more general version of the stationary phase approximation in the case that the perturbing function has critical submanifolds to derive a degenerate version of the Poincare-Hopf theorem.
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be expressed in terms of accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates in a simple way paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
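The Brownian passage time distribution used here is the inverse Gaussian distribution, whose density has a standard closed form in terms of the mean recurrence interval and the aperiodicity (coefficient of variation). A minimal sketch (parameter names are mine, not the paper's):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Density of the Brownian passage time (inverse Gaussian)
    distribution with mean recurrence interval mu and aperiodicity
    (coefficient of variation) alpha."""
    if t <= 0.0:
        return 0.0
    return math.sqrt(mu / (2.0 * math.pi * alpha ** 2 * t ** 3)) * math.exp(
        -((t - mu) ** 2) / (2.0 * mu * alpha ** 2 * t)
    )
```

With mu = 1 and alpha = 0.5 the density integrates to one and has mean mu, which can be checked by a simple quadrature.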
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
Let (M, g) be a closed Riemannian manifold of dimension n ≥ 3 and let f ∈ C^∞(M) be such that the operator P_f := Δ_g + f is positive. If g is flat near some point p and f vanishes around p, we can define the mass of P_f as the constant term in the expansion of the Green function of P_f at p. In this paper, we establish many results on the mass of such operators. In particular, if f := (n−2)/(4(n−1)) s_g, i.e. if P_f is the Yamabe operator, we show the following result: assume that there exists a closed simply connected non-spin manifold M such that the mass is non-negative for every metric g as above on M; then the mass is non-negative for every such metric on every closed manifold of the same dimension as M. (C) 2016 Elsevier Inc. All rights reserved.
Background: Circulating infliximab (IFX) concentrations correlate with clinical outcomes, forming the basis of IFX concentration monitoring in patients with Crohn's disease. This study aims to investigate and refine the exposure-response relationship by linking the disease activity markers "Crohn's disease activity index" (CDAI) and C-reactive protein (CRP) to IFX exposure. In addition, we aim to explore the correlations between different disease markers and exposure metrics.
Methods: Data from 47 Crohn's disease patients of a randomized controlled trial were analyzed post hoc. All patients had secondary treatment failure at inclusion and had received intensified IFX of 5 mg/kg every 4 weeks for up to 20 weeks. Graphical analyses were performed to explore exposure-response relationships. Metrics of exposure included area under the concentration-time curve (AUC) and trough concentrations (Cmin). Disease activity was measured by CDAI and CRP values, their change from baseline/last visit, and response/remission outcomes at week 12.
Results: Although trends toward lower Cmin and lower AUC in nonresponders were observed, neither CDAI nor CRP showed consistent trends of lower disease activity with higher IFX exposure across the 30 evaluated relationships. As expected, Cmin and AUC were strongly correlated with each other. In contrast, the disease activity markers were only weakly correlated with each other.
Conclusions: No significant relationship between disease activity, as evaluated by CDAI or CRP, and IFX exposure was identified. AUC did not add benefit compared with Cmin. These findings support the continued use of Cmin and call for stringent objective disease activity (bio-)markers (eg, endoscopy) to form the basis of personalized IFX therapy for Crohn's disease patients with IFX treatment failure.
In the present paper, we study the problem of existence of honest and adaptive confidence sets for matrix completion. We consider two statistical models: the trace regression model and the Bernoulli model. In the trace regression model, we show that honest confidence sets that adapt to the unknown rank of the matrix exist even when the error variance is unknown. Contrary to this, we prove that in the Bernoulli model, honest and adaptive confidence sets exist only when the error variance is known a priori. In the course of our proofs, we obtain bounds for the minimax rates of certain composite hypothesis testing problems arising in low rank inference.
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R^(−1) and R^4. It is assumed that the corresponding higher-dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention is paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R^(−1) model that configurations with stabilized extra dimensions do not provide late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R^4 model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher-dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width, so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R^4 model.
An intercomparison of aerosol backscatter lidar algorithms was performed in 2001 within the framework of the European Aerosol Research Lidar Network to Establish an Aerosol Climatology (EARLINET). The objective of this research was to test the correctness of the algorithms and the influence of the lidar ratio used by the various lidar teams involved in the EARLINET for calculation of backscatter-coefficient profiles from the lidar signals. The exercise consisted of processing synthetic lidar signals of various degrees of difficulty. One of these profiles contained height-dependent lidar ratios to test the vertical influence of those profiles on the various retrieval algorithms. Furthermore, a realistic incomplete overlap of laser beam and receiver field of view was introduced to remind the teams to take great care at the ranges nearest to the lidar. The intercomparison was performed in three stages with increasing knowledge of the input parameters. First, only the lidar signals were distributed; this is the most realistic stage. Afterward the lidar ratio profiles and the reference values at calibration height were provided. The unknown height-dependent lidar ratio had the largest influence on the retrieval, whereas the unknown reference value was of minor importance. These results show the necessity of making additional independent measurements, which can provide us with a suitable approximation of the lidar ratio. The final stage proves, in general, that the data evaluation schemes of the different groups of lidar systems work well. (C) 2004 Optical Society of America
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE constrained Bayesian inverse problem.
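As a rough illustration of the central idea (the empirical covariance preconditions the drift, and the noise is assembled from particle deviations so that no covariance square root is ever formed), here is a one-dimensional sketch. It is not the paper's implementation; the finite-ensemble correction term is specialised to dimension one:

```python
import math
import random

def aldi_step(xs, grad_logp, dt):
    """One Euler-Maruyama step of a 1-d ALDI-style sampler (sketch).

    The empirical variance preconditions the drift, and the noise is
    built from particle deviations, avoiding any square root of the
    covariance.  The (d + 1)/J correction term is written for d = 1.
    """
    J = len(xs)
    mean = sum(xs) / J
    cov = sum((x - mean) ** 2 for x in xs) / J  # empirical variance
    new = []
    for xi in xs:
        drift = cov * grad_logp(xi) + (2.0 / J) * (xi - mean)
        noise = math.sqrt(2.0 * dt / J) * sum(
            (xj - mean) * random.gauss(0.0, 1.0) for xj in xs
        )
        new.append(xi + drift * dt + noise)
    return new
```

For a standard Gaussian target (grad_logp(x) = −x) the ensemble spread settles near unit variance.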
We analyze a general class of difference operators H_ε = T_ε + V_ε on ℓ²((εℤ)^d), where V_ε is a multi-well potential and ε is a small parameter. We construct approximate eigenfunctions in neighbourhoods of the different wells and give weighted ℓ²-estimates for the difference of these and the exact eigenfunctions of the associated Dirichlet operators.
In the limit ε → 0 we analyse the generators H_ε of families of reversible jump processes in ℝ^d associated with a class of symmetric non-local Dirichlet forms and show exponential decay of the eigenfunctions. The exponential rate function is a Finsler distance, given as the solution of a certain eikonal equation. Fine results are sensitive to whether the rate function is C² or just Lipschitz. Our estimates are analogous to the semiclassical Agmon estimates for differential operators of second order. They generalize and strengthen previous results on the lattice ℤ^d. Although our final interest is in the (sub)stochastic jump process, technically this is a pure analysis paper, inspired by PDE techniques.
We equip the space of lattice cones with a coproduct which makes it a cograded, coaugmented, connected coalgebra. The exponential generating sum and exponential generating integral on lattice cones can be viewed as linear maps on this space with values in the space of meromorphic germs with linear poles at zero. We investigate the subdivision properties (reminiscent of the inclusion-exclusion principle for the cardinality of finite sets) of such linear maps and show that these properties are compatible with the convolution quotient of maps on the coalgebra. Implementing the algebraic Birkhoff factorization procedure on the linear maps under consideration, we factorize the exponential generating sum as a convolution quotient of two maps, with each of the maps in the factorization satisfying a subdivision property. A direct computation shows that the polar decomposition of the exponential generating sum on a smooth lattice cone yields an Euler-Maclaurin formula. The compatibility with subdivisions of the convolution quotient arising in the algebraic Birkhoff factorization then yields the Euler-Maclaurin formula for any lattice cone. This provides a simple formula for the interpolating factor by means of a projection formula.
For several applications it is very useful to classify linear or non-linear mappings by their summability properties. Absolutely summing operators and polynomials are prominent and classical examples of such a setting. Here we are interested in the larger class of almost summing polynomials, and we investigate their connections to other related notions of summability.
We study the mathematical structure underlying the concept of locality which lies at the heart of classical and quantum field theory, and develop a machinery used to preserve locality during the renormalisation procedure. Viewing renormalisation in the framework of Connes and Kreimer as the algebraic Birkhoff factorisation of characters on a Hopf algebra with values in a Rota-Baxter algebra, we build locality variants of these algebraic structures, leading to a locality variant of the algebraic Birkhoff factorisation. This provides an algebraic formulation of the conservation of locality while renormalising. As an application in the context of the Euler-Maclaurin formula on lattice cones, we renormalise the exponential generating function which sums over the lattice points in a lattice cone. As a consequence, for a suitable multivariate regularisation, renormalisation from the algebraic Birkhoff factorisation amounts to composition by a projection onto holomorphic multivariate germs.
Recent work on mutation-selection models has revealed that, under specific assumptions on the fitness function and the mutation rates, asymptotic estimates for the leading eigenvalue of the mutation-reproduction matrix may be obtained through a low-dimensional maximum principle in the limit N → ∞ (where N, or N^d with d ≥ 1, is proportional to the number of types). In order to extend this variational principle to a larger class of models, we consider here a family of reversible matrices of asymptotic dimension N^d and identify conditions under which the high-dimensional Rayleigh-Ritz variational problem may be reduced to a low-dimensional one that yields the leading eigenvalue up to an error term of order 1/N. For a large class of mutation-selection models, this implies estimates for the mean fitness, as well as a concentration result for the ancestral distribution of types.
The ensemble Kalman filter has emerged as a promising filter algorithm for nonlinear differential equations subject to intermittent observations. In this paper, we extend the well-known Kalman-Bucy filter for linear differential equations subject to continuous observations to the ensemble setting and to nonlinear differential equations. The proposed filter is called the ensemble Kalman-Bucy filter, and its performance is demonstrated for a simple mechanical model (Langevin dynamics) subject to incremental observations of its velocity.
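A minimal scalar sketch of the continuous-observation update in its deterministic ensemble form follows; the model, the gain formula P/R and all variable names are illustrative, not taken from the paper:

```python
import random

def enkbf_step(ens, dy, dt, drift, R):
    """One step of a scalar ensemble Kalman-Bucy filter (sketch).

    ens   : list of ensemble members
    dy    : observed increment of the integrated signal over dt
    drift : drift function of the model dynamics
    R     : observation noise variance

    Uses the deterministic form, in which each member is nudged by the
    Kalman gain K = P/R towards the observation through the average of
    its own state and the ensemble mean.
    """
    J = len(ens)
    m = sum(ens) / J
    P = sum((x - m) ** 2 for x in ens) / (J - 1)  # empirical variance
    K = P / R                                     # Kalman gain
    return [x + drift(x) * dt + K * (dy - 0.5 * (x + m) * dt) for x in ens]
```

Tracking a constant signal observed in noise, the ensemble mean converges to the signal while the ensemble spread (and hence the gain) contracts.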
An explicit Dobrushin uniqueness region for Gibbs point processes with repulsive interactions
(2022)
We present a uniqueness result for Gibbs point processes with interactions that come from a non-negative pair potential; in particular, we provide an explicit uniqueness region in terms of activity z and inverse temperature beta. The technique used relies on applying to the continuous setting the classical Dobrushin criterion. We also present a comparison to the two other uniqueness methods of cluster expansion and disagreement percolation, which can also be applied for this type of interaction.
The paper presents an explicit example of a noncrossed product division algebra of index and exponent 8 over the field Q(s)(t). It is an iterated twisted function field in two variables D(x, σ)(y, τ) over a quaternion division algebra D which is defined over the number field Q(√3, √−7). The automorphisms σ and τ are computed by solving relative norm equations in extensions of number fields. The example is explicit in the sense that its structure constants are known. Moreover, it is pointed out that the same arguments also yield another example, this time over the field Q((s))((t)), given by an iterated twisted Laurent series ring D((x, σ))((y, τ)) over the same quaternion division algebra D. (C) 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
We show that the Dirac operator on a compact globally hyperbolic Lorentzian spacetime with spacelike Cauchy boundary is a Fredholm operator if appropriate boundary conditions are imposed. We prove that the index of this operator is given by the same expression as in the index formula of Atiyah-Patodi-Singer for Riemannian manifolds with boundary. The index is also shown to equal that of a certain operator constructed from the evolution operator and a spectral projection on the boundary. In case the metric is of product type near the boundary a Feynman parametrix is constructed.
We consider the problem of low rank matrix recovery in a stochastically noisy high-dimensional setting. We propose a new estimator for the low rank matrix, based on the iterative hard thresholding method, that is computationally efficient and simple. We prove that our estimator is optimal in terms of the Frobenius risk and in terms of the entry-wise risk uniformly over any change of orthonormal basis, allowing us to provide the limiting distribution of the estimator. When the design is Gaussian, we prove that the entry-wise bias of the limiting distribution of the estimator is small, which is of interest for constructing tests and confidence sets for low-dimensional subsets of entries of the low rank matrix.
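The estimator is based on iterative hard thresholding; the abstract gives no pseudocode, so here is the generic scheme in its sparse-vector form (for low-rank matrix recovery, the thresholding step becomes a truncated SVD of the gradient update):

```python
def hard_threshold(v, s):
    """Keep the s entries of largest magnitude, set the rest to zero."""
    keep = set(sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:s])
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def iht(y, A, s, eta, n_iter):
    """Iterative hard thresholding, sparse-vector form (sketch):
    repeat x <- H_s(x + eta * A^T (y - A x)).  In the low-rank matrix
    setting, H_s is replaced by a rank-r truncation of the update."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = hard_threshold([x[j] + eta * g[j] for j in range(n)], s)
    return x
```

With an identity design and step size eta = 1, a single iteration already recovers an s-sparse signal exactly.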
We consider an initial value problem for the Navier-Stokes type equations associated with the de Rham complex over ℝ^n × [0, T], n ≥ 3, with a positive time T. We prove that the problem induces open injective mappings on the scales of specially constructed function spaces of Bochner-Sobolev type. In particular, the corresponding statement on the intersection of these classes gives an open mapping theorem for smooth solutions to the Navier-Stokes equations.
Students enter university computer science programmes with very different competencies, experience and knowledge. 145 datasets on freshman computer science students, collected by learning management systems and relating exam outcomes to learning dispositions data (e.g. student dispositions, previous experiences and attitudes measured through self-reported surveys), have been exploited to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine these data with physical models to produce accurate predictions. For spatio-temporal models, the ensemble Kalman filter with proper localisation techniques is considered a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman-Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show that the long-time path-wise error has only logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests.
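The abstract does not specify the localisation technique; a standard choice for tapering ensemble covariances by distance is the compactly supported Gaspari-Cohn fifth-order function, sketched here as a point of reference:

```python
def gaspari_cohn(d, c):
    """Gaspari-Cohn fifth-order taper: a compactly supported,
    correlation-like weight equal to 1 at distance 0 and 0 beyond 2c.
    Commonly used to localise ensemble covariance estimates by
    multiplying each covariance entry by the weight of its distance."""
    r = abs(d) / c
    if r < 1.0:
        return -0.25 * r**5 + 0.5 * r**4 + 0.625 * r**3 - (5.0 / 3.0) * r**2 + 1.0
    if r < 2.0:
        return (r**5) / 12.0 - 0.5 * r**4 + 0.625 * r**3 + (5.0 / 3.0) * r**2 \
               - 5.0 * r + 4.0 - 2.0 / (3.0 * r)
    return 0.0
```

The taper decreases monotonically from 1 to 0 on [0, 2c], which suppresses spurious long-range correlations produced by a finite ensemble.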