According to Radzikowski's celebrated results, bisolutions of a wave operator on a globally hyperbolic spacetime are of the Hadamard form iff they are given by a linear combination of distinguished parametrices, (i/2)(G̃_aF − G̃_F + G̃_A − G̃_R), in the sense of Duistermaat and Hörmander [Acta Math. 128, 183–269 (1972)] and Radzikowski [Commun. Math. Phys. 179, 529 (1996)]. Inspired by the construction of the corresponding advanced and retarded Green operators G_A, G_R by Bär, Ginoux, and Pfäffle [Wave Equations on Lorentzian Manifolds and Quantization (European Mathematical Society (EMS), Zürich, 2007)], we construct the remaining two Green operators G_F, G_aF locally in terms of Hadamard series. Afterward, we provide the global construction of (i/2)(G̃_aF − G̃_F), which relies on new techniques such as a well-posed Cauchy problem for bisolutions and a patching argument using Čech cohomology. This leads to global bisolutions of the Hadamard form, each of which can be chosen to be a Hadamard two-point function, i.e., the smooth part can be adapted such that, additionally, the symmetry and the positivity condition are exactly satisfied.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics to model the tail of the frequency-magnitude distribution of earthquakes in practice can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data allows this bias to be quantified in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM, with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment." Such an incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
In this paper, we examine the conditioning of the discretization of the Helmholtz problem. Although the discrete Helmholtz problem has been studied from different perspectives, to the best of our knowledge there is no conditioning analysis for it. We aim to fill this gap in the literature. We propose a novel method in 1D to observe the near-zero eigenvalues of a symmetric indefinite matrix. The standard classification of ill-conditioning based on the matrix condition number does not hold for the discrete Helmholtz problem. We relate the ill-conditioning of the discretization of the Helmholtz problem to the condition number of the matrix. We carry out an analytical conditioning analysis in 1D and extend our observations to 2D numerically. We examine several discretizations. We find different regions in which the condition number of the problem shows different characteristics. We also explain the general behavior of the solutions in these regions.
An explicit Dobrushin uniqueness region for Gibbs point processes with repulsive interactions
(2022)
We present a uniqueness result for Gibbs point processes with interactions that come from a non-negative pair potential; in particular, we provide an explicit uniqueness region in terms of activity z and inverse temperature β. The technique used relies on applying the classical Dobrushin criterion to the continuous setting. We also present a comparison to the two other uniqueness methods of cluster expansion and disagreement percolation, which can also be applied for this type of interaction.
Symmetric, elegantly entangled structures are curious mathematical constructions that have found their way into the heart of the chemistry lab and the toolbox of constructive geometry. Of particular interest are those structures (knots, links and weavings) which are composed locally of simple twisted strands and are globally symmetric. This paper considers the symmetric tangling of multiple 2-periodic honeycomb networks. We do this using a constructive methodology borrowing elements of graph theory, low-dimensional topology and geometry. The result is a wide-ranging enumeration of symmetric tangled honeycomb networks, providing a foundation for their exploration in both the chemistry lab and the geometer's toolbox.
Conventional embeddings of the edge-graphs of Platonic polyhedra, {f, z}, where f and z denote the number of edges in each face and the edge-valence at each vertex, respectively, are untangled in that they can be placed on a sphere (S²) such that distinct edges do not intersect, analogous to unknotted loops, which allow crossing-free drawings of S¹ on the sphere. The most symmetric (flag-transitive) realizations of those polyhedral graphs are those of the classical Platonic polyhedra, whose symmetries are *2fz, according to Conway's two-dimensional (2D) orbifold notation (equivalent to the Schönflies symbols I_h, O_h, and T_d). Tangled Platonic {f, z} polyhedra, which cannot lie on the sphere without edge-crossings, are constructed as windings of helices with three, five, seven, ... strands on multigenus surfaces formed by tubifying the edges of conventional Platonic polyhedra; they have (chiral) symmetries 2fz (I, O, and T), and their vertices, edges, and faces are symmetrically identical, realized with two flags. The analysis extends to the "θ_z" polyhedra, {2, z}. The vertices of these symmetric tangled polyhedra overlap with those of the Platonic polyhedra; however, their helicity requires curvilinear (or kinked) edges in all but one case. We show that these 2fz polyhedral tangles are maximally symmetric; more symmetric embeddings are necessarily untangled. On one hand, their topologies are very constrained: They are either self-entangled graphs (analogous to knots) or mutually catenated entangled compound polyhedra (analogous to links). On the other hand, an endless variety of entanglements can be realized for each topology. Simpler examples resemble patterns observed in synthetic organometallic materials and clathrin coats in vivo.
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD increasingly in everyday therapeutic use.
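The two-stage separation described above can be illustrated with a deliberately simplified conjugate-normal sketch. All distributions, variances, and function names below are illustrative assumptions, not the pharmacometric model of the study: stage one updates an individual patient's parameter from TDM data under the current population prior; stage two updates the population parameters from the patient-level summary only.

```python
import numpy as np

def stage1_patient(mu_pop, omega, y, sigma):
    """Stage 1: posterior of an individual parameter phi ~ N(mu_pop, omega^2)
    given TDM observations y_j ~ N(phi, sigma^2) (toy conjugate model)."""
    y = np.asarray(y, dtype=float)
    prec = 1.0 / omega**2 + y.size / sigma**2
    mean = (mu_pop / omega**2 + y.sum() / sigma**2) / prec
    return mean, 1.0 / prec          # posterior mean and variance

def stage2_population(mu, tau2, phi_mean, phi_var, omega):
    """Stage 2: update the population mean from a patient-level summary,
    treating it as a noisy observation with variance omega^2 + phi_var."""
    obs_var = omega**2 + phi_var
    prec = 1.0 / tau2 + 1.0 / obs_var
    mu_new = (mu / tau2 + phi_mean / obs_var) / prec
    return mu_new, 1.0 / prec        # updated population mean and variance

# Sequential learning across patients: only summaries cross the loop boundary.
mu, tau2 = 10.0, 16.0                # illustrative population prior
for tdm in ([12.0, 14.0], [9.0, 8.5, 9.5]):
    m, v = stage1_patient(mu, omega=2.0, y=tdm, sigma=1.0)
    mu, tau2 = stage2_population(mu, tau2, m, v, omega=2.0)
```

Looping the two stages over incoming patients mimics the continued learning across centers: only the per-patient summaries (phi_mean, phi_var) are shared, never the raw TDM data.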
We study boundary value problems for first-order elliptic differential operators on manifolds with compact boundary. The adapted boundary operator need not be self-adjoint and the boundary condition need not be pseudo-local. We show the equivalence of various characterisations of elliptic boundary conditions and demonstrate how the boundary conditions traditionally considered in the literature fit in our framework. The regularity of the solutions up to the boundary is proven. We show that imposing elliptic boundary conditions yields a Fredholm operator if the manifold is compact. We provide examples which are conveniently treated by our methods.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
We show that local deformations, near closed subsets, of solutions to open partial differential relations can be extended to global deformations, provided all but the highest derivatives stay constant along the subset. The applicability of this general result is illustrated by a number of examples, dealing with convex embeddings of hypersurfaces, differential forms, and lapse functions in Lorentzian geometry.
The main application is a general approximation result by sections that have very restrictive local properties on open dense subsets. This shows, for instance, that given any K ∈ ℝ, every manifold of dimension at least 2 carries a complete C^{1,1}-metric which, on a dense open subset, is smooth with constant sectional curvature K. Of course, this is impossible for C²-metrics in general.
We present a technique for the enumeration of all isotopically distinct ways of tiling a hyperbolic surface of finite genus, possibly nonorientable and with punctures and boundary. This generalizes the enumeration, using Delaney–Dress combinatorial tiling theory, of combinatorial classes of tilings to isotopy classes of tilings. To accomplish this, we derive an action of the mapping class group of the orbifold associated to the symmetry group of a tiling on the set of tilings. We explicitly give descriptions and presentations of semipure mapping class groups and of tilings as decorations on orbifolds. We apply this enumerative result to generate an array of isotopically distinct tilings of the hyperbolic plane with symmetries generated by rotations that are commensurate with the three-dimensional symmetries of the primitive, diamond, and gyroid triply periodic minimal surfaces, which have relevance to a variety of physical systems.
Randomised one-step time integration methods for deterministic operator differential equations
(2022)
Uncertainty quantification plays an important role in problems that involve inferring a parameter of an initial value problem from observations of the solution. Conrad et al. (Stat Comput 27(4):1065-1082, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to the unknown time discretisation error. We consider this strategy for systems that are described by deterministic, possibly time-dependent operator differential equations defined on a Banach space or a Gelfand triple. Our main results are strong error bounds on the random trajectories measured in Orlicz norms, proven under a weaker assumption on the local truncation error of the underlying deterministic time integration method. Our analysis establishes the theoretical validity of randomised time integration for differential equations in infinite-dimensional settings.
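The randomisation strategy of Conrad et al. can be sketched for the explicit Euler method as follows. The additive-Gaussian form and the noise scale h^(p + 1/2) (matching a method of local order p) are assumptions of this finite-dimensional toy version, not a statement about the operator-valued, Banach-space setting of the abstract above.

```python
import numpy as np

def randomised_euler(f, x0, h, n_steps, p=1, scale=1.0, rng=None):
    """Explicit Euler with additive Gaussian perturbations whose standard
    deviation scales like h**(p + 0.5), so that the injected noise matches
    the local truncation error of a method of order p (toy sketch)."""
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    traj = [x.copy()]
    for _ in range(n_steps):
        # deterministic Euler step plus a randomisation of the local error
        x = x + h * f(x) + scale * h**(p + 0.5) * rng.standard_normal(x.shape)
        traj.append(x.copy())
    return np.array(traj)

# Ensemble of random trajectories for dx/dt = -x quantifies discretisation error.
ensemble = [randomised_euler(lambda x: -x, 1.0, 0.1, 50, rng=k) for k in range(20)]
```

Setting scale=0 recovers the underlying deterministic integrator, which is how the strong error bounds of the paper compare random and deterministic trajectories.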
Variational Bayesian inference for nonlinear Hawkes processes with Gaussian process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian process. Whereas previous work either relies on a less flexible parameterization of the model or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work on Bayesian inference for Hawkes processes and derive an inference algorithm by performing inference on an aggregated sum of Gaussian processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model, we apply our methodology to data from different domains and compare it to previously reported results.
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler–Jones–Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Güneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah–Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaumé, Atiyah, Bismut and Witten.
Background
Cytochrome P450 (CYP) 3A contributes to the metabolism of many approved drugs. CYP3A perpetrator drugs can profoundly alter the exposure of CYP3A substrates. However, effects of such drug-drug interactions are usually reported as maximum effects rather than studied as time-dependent processes. Identification of the time course of CYP3A modulation can provide insight into when significant changes to CYP3A activity occur, help better design drug-drug interaction studies, and manage drug-drug interactions in clinical practice.
Objective
We aimed to quantify the time course and extent of the in vivo modulation of different CYP3A perpetrator drugs on hepatic CYP3A activity and distinguish different modulatory mechanisms by their time of onset, using pharmacologically inactive intravenous microgram doses of the CYP3A-specific substrate midazolam, as a marker of CYP3A activity.
Methods
Twenty-four healthy individuals received an intravenous midazolam bolus followed by a continuous infusion for 10 or 36 h. Individuals were randomized into four arms: within each arm, two individuals served as a placebo control and, 2 h after start of the midazolam infusion, four individuals received the CYP3A perpetrator drug: voriconazole (inhibitor, orally or intravenously), rifampicin (inducer, orally), or efavirenz (activator, orally). After midazolam bolus administration, blood samples were taken every hour (rifampicin arm) or every 15 min (remaining study arms) until the end of midazolam infusion. A total of 1858 concentrations were equally divided between midazolam and its metabolite, 1'-hydroxymidazolam. A nonlinear mixed-effects population pharmacokinetic model of both compounds was developed using NONMEM®. CYP3A activity modulation was quantified over time, as the relative change of midazolam clearance encountered by the perpetrator drug, compared to the corresponding clearance value in the placebo arm.
Results
Time course of CYP3A modulation and magnitude of maximum effect were identified for each perpetrator drug. While efavirenz CYP3A activation was relatively fast and short, reaching a maximum after approximately 2-3 h, the induction effect of rifampicin could only be observed after 22 h, with a maximum after approximately 28-30 h followed by a steep drop to almost baseline within 1-2 h. In contrast, the inhibitory impact of both oral and intravenous voriconazole was prolonged with a steady inhibition of CYP3A activity followed by a gradual increase in the inhibitory effect until the end of sampling at 8 h. Relative maximum clearance changes were +59.1%, +46.7%, -70.6%, and -61.1% for efavirenz, rifampicin, oral voriconazole, and intravenous voriconazole, respectively.
Conclusions
We could distinguish between different mechanisms of CYP3A modulation by the time of onset. Identification of the time at which clearance significantly changes, per perpetrator drug, can guide the design of an optimal sampling schedule for future drug-drug interaction studies. The impact of a short-term combination of different perpetrator drugs on the paradigm CYP3A substrate midazolam was characterized and can define combination intervals in which no relevant interaction is to be expected.
Ulcerative colitis (UC) belongs to the inflammatory bowel diseases, and moderate to severe UC patients can be treated with anti-tumour necrosis factor alpha monoclonal antibodies, including infliximab (IFX). Even though treatment of UC patients with IFX has been in place for over a decade, many gaps in the modelling of IFX pharmacokinetics (PK) in this population remain. This is even more true for acute severe UC (ASUC) patients, for whom early prediction of IFX PK could greatly improve treatment outcome. Thus, this review aims to compile and analyse published population PK models of IFX in UC and ASUC patients, and to assess the current knowledge on the impact of disease activity on IFX PK. For this, a semi-systematic literature search was conducted, from which 26 publications including a population PK model analysis of UC patients receiving IFX therapy were selected. Amongst those, only four developed a model specifically for UC patients, and only three populations included severe UC patients. The impact of disease activity on PK was investigated in only 4 of the 14 models selected. In addition, the lack of reported model code and of assessments of predictive performance makes the use of published models in a clinical setting challenging. Thus, more comprehensive investigation of PK in UC and ASUC is needed, as well as more adequate reporting of developed models and their evaluation, in order to apply them in a clinical setting.
We propose a global geomagnetic field model for the last 14 thousand years, based on thermoremanent records. We call the model ArchKalmag14k. ArchKalmag14k is constructed by modifying recently proposed algorithms, based on space-time correlations. Due to the amount of data and complexity of the model, the full Bayesian posterior is numerically intractable. To tackle this, we sequentialize the inversion by implementing a Kalman-filter with a fixed time step. Every step consists of a prediction, based on a degree dependent temporal covariance, and a correction via Gaussian process regression. Dating errors are treated via a noisy input formulation. Cross correlations are reintroduced by a smoothing algorithm and model parameters are inferred from the data. Due to the specific statistical nature of the proposed algorithms, the model comes with space and time-dependent uncertainty estimates. The new model ArchKalmag14k shows less variation in the large-scale degrees than comparable models. Local predictions represent the underlying data and agree with comparable models, if the location is sampled well. Uncertainties are bigger for earlier times and in regions of sparse data coverage. We also use ArchKalmag14k to analyze the appearance and evolution of the South Atlantic anomaly together with reverse flux patches at the core-mantle boundary, considering the model uncertainties. While we find good agreement with earlier models for recent times, our model suggests a different evolution of intensity minima prior to 1650 CE. In general, our results suggest that prior to 6000 BCE the data is not sufficient to support global models.
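One predict/correct cycle of the kind sequentialized in ArchKalmag14k can be sketched generically. The matrices below are placeholders chosen for illustration; the actual model uses degree-dependent temporal covariances on spherical harmonic coefficients and a Gaussian-process-regression correction, neither of which is reproduced here.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, y):
    """One linear-Gaussian Kalman cycle: prediction with transition F and
    process noise Q, then correction by the observations y with forward
    operator H and data noise R (generic placeholders throughout)."""
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correction (Kalman gain from the innovation covariance S)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The returned covariance P_new is what supplies the space- and time-dependent uncertainty estimates mentioned in the abstract; a smoothing pass over the stored (x, P) pairs would then reintroduce cross correlations.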
Let X be an infinite linearly ordered set and let Y be a nonempty subset of X. We calculate the relative rank of the semigroup OP(X,Y) of all orientation-preserving transformations on X with restricted range Y modulo the semigroup O(X,Y) of all order-preserving transformations on X with restricted range Y. For Y = X, we characterize the relative generating sets of minimal size.
Alpine ecosystems on the Tibetan Plateau are being threatened by ongoing climate warming and intensified human activities. Ecological time-series obtained from sedimentary ancient DNA (sedaDNA) are essential for understanding past ecosystem and biodiversity dynamics on the Tibetan Plateau and their responses to climate change at a high taxonomic resolution. Hitherto, only a few but promising studies have been published on this topic. The potential and limitations of using sedaDNA on the Tibetan Plateau are not fully understood. Here, we (i) provide updated knowledge of and a brief introduction to the suitable archives, region-specific taphonomy, state-of-the-art methodologies, and research questions of sedaDNA on the Tibetan Plateau; (ii) review published and ongoing sedaDNA studies from the Tibetan Plateau; and (iii) give some recommendations for future sedaDNA study designs. Based on the current knowledge of taphonomy, we infer that deep glacial lakes with freshwater and high clay sediment input, such as those from the southern and southeastern Tibetan Plateau, may have a high potential for sedaDNA studies. Metabarcoding (for microorganisms and plants), metagenomics (for ecosystems), and hybridization capture (for prehistoric humans) are three primary sedaDNA approaches which have been successfully applied on the Tibetan Plateau, but their power is still limited by several technical issues, such as PCR bias and incompleteness of taxonomic reference databases. Setting up high-quality and open-access regional taxonomic reference databases for the Tibetan Plateau should be given priority in the future. To conclude, the archival, taphonomic, and methodological conditions of the Tibetan Plateau are favorable for performing sedaDNA studies. More research should be encouraged to address questions about long-term ecological dynamics at the ecosystem scale and to bring the paleoecology of the Tibetan Plateau into a new era.
In this work, we present Raman lidar data (from a Nd:YAG laser operating at 355 nm, 532 nm and 1064 nm) from the international research village Ny-Ålesund for the period of January to April 2020, during the Arctic haze season of the MOSAiC winter. We present values of the aerosol backscatter, the lidar ratio and the backscatter Ångström exponent, though the latter depends on wavelength. The aerosol polarization was generally below 2%, indicating mostly spherical particles. We observed that events with high backscatter and high lidar ratio did not coincide. In fact, the highest lidar ratios (LR > 75 sr at 532 nm) were already found by January and may have been caused by hygroscopic growth rather than by advection of more continental aerosol. Further, we performed an inversion of the lidar data to retrieve a refractive index and a size distribution of the aerosol. Our results suggest that in the free troposphere (above ≈2500 m) the aerosol size distribution is quite constant in time, with a dominance of small particles with a modal radius well below 100 nm. On the contrary, below ≈2000 m in altitude, we frequently found gradients in aerosol backscatter and even size distribution, sometimes in accordance with gradients of wind speed, humidity or elevated temperature inversions, as if the aerosol was strongly modified by vertical displacement in what we call the "mechanical boundary layer". Finally, we present an indication that the additional meteorological soundings during the MOSAiC campaign did not necessarily improve the fidelity of air back-trajectories.
We introduce the class of "smooth rough paths" and study their main properties. Working in a smooth setting allows us to discard sewing arguments and focus on algebraic and geometric aspects. Specifically, a Maurer-Cartan perspective is the key to a purely algebraic form of Lyons' extension theorem, the renormalization of rough paths following up on [Bruned et al.: A rough path perspective on renormalization, J. Funct. Anal. 277(11), 2019], as well as a related notion of "sum of rough paths". We first develop our ideas in a geometric rough path setting, as this best resonates with recent works on signature varieties, as well as with the renormalization of geometric rough paths. We then explore extensions to the quasi-geometric and the more general Hopf algebraic setting.
We construct and examine the prototype of a deep learning-based ground-motion model (GMM) that is both fully data driven and nonergodic. We formulate ground-motion modeling as an image processing task, in which a specific type of neural network, the U-Net, relates continuous, horizontal maps of earthquake predictive parameters to sparse observations of a ground-motion intensity measure (IM). The processing of map-shaped data allows the natural incorporation of absolute earthquake source and observation site coordinates, and is, therefore, well suited to include site-, source-, and path-specific amplification effects in a nonergodic GMM. Data-driven interpolation of the IM between observation points is an inherent feature of the U-Net and requires no a priori assumptions. We evaluate our model using both a synthetic dataset and a subset of observations from the KiK-net strong motion network in the Kanto basin in Japan. We find that the U-Net model is capable of learning the magnitude-distance scaling, as well as site-, source-, and path-specific amplification effects from a strong motion dataset. The interpolation scheme is evaluated using a fivefold cross validation and is found to provide on average unbiased predictions. The magnitude-distance scaling as well as the site amplification of response spectral acceleration at a period of 1 s obtained for the Kanto basin are comparable to previous regional studies.
We study superharmonic functions for Schrödinger operators on general weighted graphs. Specifically, we prove two decompositions which both go under the name Riesz decomposition in the literature. The first one decomposes a superharmonic function into a harmonic and a potential part. The second one decomposes a superharmonic function into a sum of superharmonic functions with certain upper bounds given by prescribed superharmonic functions. As an application, we show a Brelot-type theorem.
We adapt the Faddeev-LeVerrier algorithm for the computation of characteristic polynomials to the computation of the Pfaffian of a skew-symmetric matrix. This yields a very simple, easy-to-implement and parallelizable algorithm of computational cost O(n^(β+1)), where n is the size of the matrix, O(n^β) is the cost of multiplying n × n matrices, and β ∈ [2, 2.37286). We compare its performance to that of other algorithms and show how it can be used to compute the Euler form of a Riemannian manifold using computer algebra.
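For context, a naive reference implementation of the Pfaffian (Laplace-type expansion along the first row, exponential cost) is useful for testing fast schemes such as the adapted Faddeev-LeVerrier algorithm. It is emphatically not the O(n^(β+1)) method of the paper, whose recursion is not reproduced here:

```python
import numpy as np

def pfaffian_naive(A):
    """Pfaffian of a real skew-symmetric matrix A via the recursive
    expansion pf(A) = sum_j (-1)^j a_{1j} pf(A with rows/cols 1, j removed).
    Exponential cost; intended only as a correctness check."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0   # odd-dimensional skew-symmetric matrices are singular
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        sub = A[np.ix_(keep, keep)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian_naive(sub)
    return total
```

A quick sanity check for any Pfaffian routine is the identity pf(A)² = det(A) for skew-symmetric A of even size.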
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
In the semiclassical limit ħ → 0, we analyze a class of self-adjoint Schrödinger operators H_ħ = ħ²L + ħW + V·id_E acting on sections of a vector bundle E over an oriented Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field, and the potential energy V has non-degenerate minima at a finite number of points m_1, ..., m_r ∈ M, called potential wells. Using quasimodes of WKB-type near m_j for eigenfunctions associated with the low-lying eigenvalues of H_ħ, we analyze the tunneling effect, i.e. the splitting between low-lying eigenvalues, which e.g. arises in certain symmetric configurations. Technically, we treat the coupling between different potential wells by an interaction matrix, and we consider the case of a single minimal geodesic (with respect to the associated Agmon metric) connecting two potential wells and the case of a submanifold of minimal geodesics of dimension ℓ + 1. This dimension ℓ determines the polynomial prefactor for exponentially small eigenvalue splitting.
The Rarita-Schwinger operator is the twisted Dirac operator restricted to 3/2-spinors. Rarita-Schwinger fields are solutions of this operator which are in addition divergence-free. This is an overdetermined problem and solutions are rare; it is even more unexpected for there to be large-dimensional spaces of solutions. In this paper we prove the existence of a sequence of compact manifolds in any given dimension greater than or equal to 4 for which the dimension of the space of Rarita-Schwinger fields tends to infinity. These manifolds are either simply connected Kähler-Einstein spin with negative Einstein constant, or products of such spaces with flat tori. Moreover, we construct Calabi-Yau manifolds of even complex dimension with more linearly independent Rarita-Schwinger fields than flat tori of the same dimension.
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework, a neural network consisting of random feature maps is trained sequentially by incoming observations within a data assimilation procedure. By employing Takens's embedding theorem, the network is trained on delay coordinates. We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
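The forecast-model half of this setup can be sketched as a random feature map whose internal weights stay fixed and whose outer weights are fitted. In the sketch below the fit is a single batch ridge regression on a toy signal, whereas RAFDA itself updates the outer weights sequentially with an ensemble Kalman filter on delay coordinates; the dimensions, scales, and target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random feature map: internal weights W_in, b_in are drawn once and never
# trained; only the linear outer layer W_out is learned from data.
D_r, reg = 300, 1e-6                       # feature count and ridge parameter
x = np.linspace(-np.pi, np.pi, 400)[:, None]
y = np.sin(3 * x).ravel()                  # toy "dynamics" to learn

W_in = rng.normal(0.0, 2.0, (D_r, 1))      # fixed random internal weights
b_in = rng.uniform(-np.pi, np.pi, D_r)     # fixed random biases
Phi = np.tanh(x @ W_in.T + b_in)           # feature matrix, shape (400, D_r)

# Batch ridge regression for the outer weights (RAFDA would do this step
# sequentially via an ensemble Kalman filter as observations arrive).
W_out = np.linalg.solve(Phi.T @ Phi + reg * np.eye(D_r), Phi.T @ y)
rmse = np.sqrt(np.mean((Phi @ W_out - y) ** 2))
```

Because the internal weights are never trained, the learning problem for W_out is linear, which is exactly what makes a Kalman-filter-based sequential update tractable.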
We provide an overview of the tools and techniques of resurgence theory used in the Borel-Écalle resummation method, which we then apply to the massless Wess-Zumino model. Starting from already known results on the anomalous dimension of the Wess-Zumino model, we solve its renormalisation group equation for the two-point function in a space of formal series. We show that this solution is 1-Gevrey and that its Borel transform is resurgent. The Schwinger-Dyson equation of the model is then used to prove an asymptotic exponential bound for the Borel-transformed two-point function on a star-shaped domain of a suitable ramified complex plane. This proves that the two-point function of the Wess-Zumino model is Borel-Écalle summable.
For a closed, connected direct product Riemannian manifold (M, g) = (M_1, g_1) × ... × (M_l, g_l), we define its multiconformal class [[g]] as the totality {f_1²g_1 ⊕ ... ⊕ f_l²g_l} of all Riemannian metrics obtained from multiplying the metric g_i of each factor M_i by a positive function f_i on the total space M. A multiconformal class [[g]] contains not only all warped product type deformations of g but also the whole conformal class [g̃] of every g̃ ∈ [[g]]. In this article, we prove that [[g]] contains a metric of positive scalar curvature if and only if the conformal class of some factor (M_i, g_i) does, under the technical assumption dim M_i ≥ 2. We also show that, even in the case where every factor (M_i, g_i) has positive scalar curvature, [[g]] contains a metric of scalar curvature constantly equal to −1 and with arbitrarily large volume, provided l ≥ 2 and dim M ≥ 3.
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons going well beyond those to be expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models and their inherent uncertainty with incoming noisy observations. The key idea in our work here is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We combine the physics-agnostic data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closure in multi-scale systems.
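The ensemble Kalman filter analysis step that drives such a procedure can be sketched in a few lines. This is the standard stochastic (perturbed-observation) variant on a toy two-dimensional state with one observed component — an assumed minimal setting, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Perturbed-observation EnKF analysis step for a linear observation y = H x + noise.
M = 100                      # ensemble size
x_true = np.array([1.0, -0.5])
H = np.array([[1.0, 0.0]])   # observe the first component only
R = np.array([[0.05]])       # observation error covariance

# Forecast ensemble, scattered around a deliberately biased prior mean.
Xf = rng.normal(loc=[2.0, 0.0], scale=0.8, size=(M, 2))
y_obs = H @ x_true + rng.normal(scale=np.sqrt(R[0, 0]))

# Sample covariance and Kalman gain.
A = Xf - Xf.mean(axis=0)
Pf = A.T @ A / (M - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Each member assimilates its own perturbed copy of the observation.
Y = y_obs + rng.normal(scale=np.sqrt(R[0, 0]), size=(M, 1))
Xa = Xf + (Y - Xf @ H.T) @ K.T

prior_err = np.linalg.norm(Xf.mean(axis=0) - x_true)
post_err = np.linalg.norm(Xa.mean(axis=0) - x_true)
```

The unobserved second component is also corrected, through the sample cross-covariance in Pf — the mechanism that lets the filter train hidden model parameters from partial observations.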
In this paper, we bring together the worlds of model order reduction for stochastic linear systems and H_2-optimal model order reduction for deterministic systems. In particular, we supplement and complete the theory of error bounds for model order reduction of stochastic differential equations. With these error bounds, we establish a link between the output error for stochastic systems (with additive and multiplicative noise) and modified versions of the H_2-norm for both linear and bilinear deterministic systems. When deriving the respective optimality conditions for minimizing the error bounds, we see that model order reduction techniques related to iterative rational Krylov algorithms (IRKA) are very natural and effective methods for reducing the dimension of large-scale stochastic systems with additive and/or multiplicative noise. We apply modified versions of (linear and bilinear) IRKA to stochastic linear systems and show their efficiency in numerical experiments.
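The H_2-norm that IRKA-type methods minimize admits a compact Gramian formula. A small sketch (a dense Lyapunov solve via Kronecker vectorization, suitable only for tiny systems — not IRKA itself, and not the paper's stochastic variants):

```python
import numpy as np

# H2 norm of a stable LTI system (A, B, C): ||G||_H2 = sqrt(trace(C P C^T)),
# where the controllability Gramian P solves A P + P A^T + B B^T = 0.
def h2_norm(A, B, C):
    n = A.shape[0]
    # Vectorized Lyapunov equation: (I kron A + A kron I) vec(P) = -vec(B B^T).
    L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    P = np.linalg.solve(L, -(B @ B.T).ravel()).reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Scalar sanity check: A = -a gives P = b^2/(2a), so ||G||_H2 = c*b/sqrt(2a).
a, b, c = 2.0, 3.0, 0.5
val = h2_norm(np.array([[-a]]), np.array([[b]]), np.array([[c]]))
exact = c * b / np.sqrt(2.0 * a)
```

For large-scale systems one would of course replace the Kronecker solve by a dedicated Lyapunov solver; the point here is only the norm-Gramian identity underlying the error bounds.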
Identification of unknown parameters on the basis of partial and noisy data is a challenging task, in particular in high dimensional and non-linear settings. Gaussian approximations to the problem, such as ensemble Kalman inversion, tend to be robust and computationally cheap and often produce astonishingly accurate estimates despite the simplifying underlying assumptions. Yet there is a lot of room for improvement, specifically regarding a correct approximation of a non-Gaussian posterior distribution. The tempered ensemble transform particle filter is an adaptive Sequential Monte Carlo (SMC) method, whereby resampling is based on optimal transport mapping. Unlike ensemble Kalman inversion, it does not require any assumptions regarding the posterior distribution and hence has been shown to provide promising results for non-linear non-Gaussian inverse problems. However, the improved accuracy comes at the price of much higher computational complexity, and the method is not as robust as ensemble Kalman inversion in high dimensional problems. In this work, we add an entropy-inspired regularisation factor to the underlying optimal transport problem that allows the high computational cost to be considerably reduced via Sinkhorn iterations. Further, the robustness of the method is increased via an ensemble Kalman inversion proposal step before each update of the samples, which is also referred to as a hybrid approach. The promising performance of the introduced method is numerically verified by testing it on a steady-state single-phase Darcy flow model with two different permeability configurations. The results are compared to the output of ensemble Kalman inversion, and Markov chain Monte Carlo results are computed as a benchmark.
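The Sinkhorn iteration at the heart of the entropy-regularized transport step is short enough to sketch. This toy version maps non-uniform importance weights to a uniform ensemble on a handful of particles; the cost matrix, regularization strength, and iteration count are illustrative assumptions.

```python
import numpy as np

# Entropy-regularized optimal transport between particle weights a (posterior
# importance weights) and uniform target weights b, via Sinkhorn iterations.
def sinkhorn(a, b, cost, eps=0.2, n_iter=500):
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan T

rng = np.random.default_rng(2)
n = 8
particles = rng.normal(size=n)
cost = (particles[:, None] - particles[None, :]) ** 2
cost /= cost.max()                       # normalize for a well-scaled eps
a = rng.random(n); a /= a.sum()          # non-uniform (importance) weights
b = np.full(n, 1.0 / n)                  # target: equally weighted ensemble
T = sinkhorn(a, b, cost)
```

Each iteration costs only a matrix-vector product, which is the source of the claimed reduction in computational complexity compared with solving the exact transport problem.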
Various particle filters have been proposed over the last couple of decades with the common feature that the update step is governed by a type of control law. This feature makes them an attractive alternative to traditional sequential Monte Carlo, which scales poorly with the state dimension due to weight degeneracy. This article proposes a unifying framework that allows us to systematically derive the McKean-Vlasov representations of these filters for the discrete time and continuous time observation case, taking inspiration from the smooth approximation of the data considered in [D. Crisan and J. Xiong, Stochastics, 82 (2010), pp. 53-68; J. M. Clark and D. Crisan, Probab. Theory Related Fields, 133 (2005), pp. 43-56]. We consider three filters that have been proposed in the literature and use this framework to derive Itô representations of their limiting forms as the approximation parameter δ → 0. All filters require the solution of a Poisson equation defined on R^d, for which existence and uniqueness of solutions can be a nontrivial issue. We additionally establish conditions on the signal-observation system that ensure well-posedness of the weighted Poisson equation arising in one of the filters.
We consider an initial value problem for the Navier-Stokes type equations associated with the de Rham complex over R^n × [0, T], n ≥ 3, with a positive time T. We prove that the problem induces open injective mappings on the scales of specially constructed function spaces of Bochner-Sobolev type. In particular, the corresponding statement on the intersection of these classes gives an open mapping theorem for smooth solutions to the Navier-Stokes equations.
A characterization of the essential spectrum of Schrödinger operators on infinite graphs is derived involving the concept of R-limits. This concept, which was introduced previously for operators on N and Z^d as "right-limits," captures the behaviour of the operator at infinity. For graphs with sub-exponential growth rate, we show that each point in σ_ess(H) corresponds to a bounded generalized eigenfunction of a corresponding R-limit of H. If, additionally, the graph is of uniform sub-exponential growth, the converse inclusion also holds.
Sequential data assimilation of the stochastic SEIR epidemic model for regional COVID-19 dynamics
(2021)
Newly emerging pandemics like COVID-19 call for predictive models to implement precisely tuned responses to limit their deep impact on society. Standard epidemic models provide a theoretically well-founded dynamical description of disease incidence. For COVID-19 with infectiousness peaking before and at symptom onset, the SEIR model explains the hidden build-up of exposed individuals which creates challenges for containment strategies. However, spatial heterogeneity raises questions about the adequacy of modeling epidemic outbreaks on the level of a whole country. Here, we show that by applying sequential data assimilation to the stochastic SEIR epidemic model, we can capture the dynamic behavior of outbreaks on a regional level. Regional modeling, with relatively low numbers of infected and demographic noise, accounts for both spatial heterogeneity and stochasticity. Based on adapted models, short-term predictions can be achieved. Thus, with the help of these sequential data assimilation methods, more realistic epidemic models are within reach.
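A stochastic SEIR model with demographic noise — the kind of regional-scale model a sequential filter would track — can be simulated with binomial transition counts. A minimal sketch with assumed, illustrative rate parameters (not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)

# One discrete-time step of a stochastic SEIR model: each compartment loses a
# binomially distributed number of individuals, producing demographic noise.
def seir_step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1):
    N = S + E + I + R
    new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))  # S -> E (infection)
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))         # E -> I (end of latency)
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))         # I -> R (recovery)
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

# Simulate a regional outbreak with a relatively small population.
S, E, I, R = 99000, 500, 400, 100
traj = [(S, E, I, R)]
for _ in range(50):
    S, E, I, R = seir_step(S, E, I, R)
    traj.append((S, E, I, R))
```

Because the transitions are integer-valued, the relative noise grows as compartment counts shrink — exactly the stochasticity that matters at the regional level and that the data assimilation scheme has to accommodate.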
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented, which incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy-input Gaussian process. All but one of the modeling hyperparameters are marginalized, to reduce their influence on the outcome and to translate their variability to the posterior variance. The resulting distribution incorporates uncertainties related to the dating, measurement and modeling process. Results from application to archeomagnetic data show less variation in the dipole than comparable models, but are in general agreement with previous findings.
Bayesian inference can be embedded into an appropriately defined dynamics in the space of probability measures. In this paper, we take Brownian motion and its associated Fokker-Planck equation as a starting point for such embeddings and explore several interacting particle approximations. More specifically, we consider both deterministic and stochastic interacting particle systems and combine them with the idea of preconditioning by the empirical covariance matrix. In addition to leading to affine invariant formulations which asymptotically speed up convergence, preconditioning allows for gradient-free implementations in the spirit of the ensemble Kalman filter. While such gradient-free implementations have been demonstrated to work well for posterior measures that are nearly Gaussian, we extend their scope of applicability to multimodal measures by introducing localized gradient-free approximations. Numerical results demonstrate the effectiveness of the considered methodologies.
In June 2018, after 4 years of cruise, the Japanese space probe Hayabusa2 [1-Watanabe S. et al.: Hayabusa2 Mission Overview. (2017)] reached the Near-Earth Asteroid (162173) Ryugu. Hayabusa2 carried a small Lander named MASCOT (Mobile Asteroid Surface Scout) [2-Ho T. M. et al.: MASCOT-The Mobile Asteroid Surface Scout onboard the Hayabusa2 mission. (2017)], jointly developed by the German Aerospace Center (DLR) and the French Space Agency (CNES), to investigate Ryugu's surface structure, composition and physical properties including its thermal behaviour and magnetization in-situ. The Microgravity User Support Centre (DLR-MUSC) in Cologne was in charge of providing all thermal conditions and constraints necessary for the selection of the final landing site and for the final operations of the Lander MASCOT on the surface of the asteroid Ryugu. This article provides a comprehensive assessment of these thermal conditions and constraints, based on predictions performed with the Thermal Mathematical Model (TMM) of MASCOT using different asteroid surface thermal models, ephemeris data for approach as well as descent and hopping trajectories, the related operation sequences and scenarios and the possible environmental conditions driven by the Hayabusa2 spacecraft. A comparison with the real telemetry data confirms the analysis and provides further information about the asteroid characteristics.
Data assimilation algorithms are used to estimate the states of a dynamical system using partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness for a wide range of application areas. Nevertheless, this filter also has limitations due to its inherent assumptions of Gaussianity and linearity, which can manifest themselves in the form of dynamically inconsistent state estimates. This issue is investigated here for balanced, slowly evolving solutions to highly oscillatory Hamiltonian systems which are prototypical for applications in numerical weather prediction. It is demonstrated that the standard ensemble Kalman filter can lead to state estimates that do not satisfy the pertinent balance relations and ultimately lead to filter divergence. Two remedies are proposed, one in terms of blended asymptotically consistent time-stepping schemes, and one in terms of minimization-based postprocessing methods. The effects of these modifications to the standard ensemble Kalman filter are discussed and demonstrated numerically for balanced motions of two prototypical Hamiltonian reference systems.
The superposition operation S_n^A, n ≥ 1, n ∈ N, assigns to each (n + 1)-tuple of n-ary operations on a set A an n-ary operation on A and satisfies the so-called superassociative law, a generalization of the associative law. The corresponding algebraic structures are Menger algebras of rank n. A partial algebra of type (n + 1) which satisfies the superassociative law as a weak identity is said to be a partial Menger algebra of rank n. As a generalization of linear terms we define r-terms as terms in which each variable occurs at most r times. It will be proved that n-ary r-terms form partial Menger algebras of rank n. In this paper, some algebraic properties of partial Menger algebras, such as generating systems, homomorphic images and freeness, are investigated. As a generalization of hypersubstitutions and linear hypersubstitutions we consider r-hypersubstitutions.
Forecast verification
(2021)
The philosophy of forecast verification is rather different between deterministic and probabilistic verification metrics: generally speaking, deterministic metrics measure differences, whereas probabilistic metrics assess reliability and sharpness of predictive distributions. This article considers the root-mean-square error (RMSE), which can be seen as a deterministic metric, and the probabilistic metric Continuous Ranked Probability Score (CRPS), and demonstrates that under certain conditions, the CRPS can be mathematically expressed in terms of the RMSE when these metrics are aggregated. One of the required conditions is the normality of distributions. The other condition is that, while the forecast ensemble need not be calibrated, any bias or over/underdispersion cannot depend on the forecast distribution itself. Under these conditions, the CRPS is a fraction of the RMSE, and this fraction depends only on the heteroscedasticity of the ensemble spread and the measures of calibration. The derived CRPS-RMSE relationship for the case of perfect ensemble reliability is tested on simulations of idealised two-dimensional barotropic turbulence. Results suggest that the relationship holds approximately despite the normality condition not being met.
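For the perfect-reliability Gaussian case, the aggregated CRPS is a fixed fraction of the RMSE, and that fraction is 1/sqrt(pi) ≈ 0.564. A Monte Carlo sketch using the textbook closed-form CRPS of a normal distribution (this checks the limiting identity, not the paper's more general derivation; the spread value is an arbitrary assumption):

```python
import numpy as np
from math import erf, exp, pi, sqrt

# Closed-form CRPS of a normal forecast N(mu, s^2) evaluated at outcome y.
def crps_normal(mu, s, y):
    z = (y - mu) / s
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))        # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)      # standard normal PDF
    return s * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / sqrt(pi))

# Perfectly reliable forecasts: the outcome is drawn from the forecast itself.
rng = np.random.default_rng(4)
n, s = 50000, 1.3
y = rng.normal(loc=0.0, scale=s, size=n)
crps = np.mean([crps_normal(0.0, s, yi) for yi in y])
rmse = np.sqrt(np.mean(y ** 2))
ratio = crps / rmse    # theory for this idealized case: 1/sqrt(pi)
```

The ratio below one reflects that the CRPS rewards a forecast for honestly reporting its spread, while the RMSE penalizes only the error of the mean.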
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Gamma-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
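The coincidence of the MAP estimator with the regularised variational solution is exact in the linear-Gaussian case, where the Onsager-Machlup functional is a Tikhonov functional. A small numerical sketch of this standard fact (dimensions and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-Gaussian inverse problem y = A x + noise, Gaussian prior N(0, g^2 I).
# The posterior mode (MAP) minimizes the Onsager-Machlup functional
#   J(x) = ||A x - y||^2 / (2 s^2) + ||x||^2 / (2 g^2),
# which is exactly Tikhonov-regularized least squares with lam = s^2 / g^2.
n, m = 5, 12
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
s, g = 0.1, 1.0
y = A @ x_true + rng.normal(scale=s, size=m)

# MAP estimator from the normal equations of J.
x_map = np.linalg.solve(A.T @ A / s**2 + np.eye(n) / g**2, A.T @ y / s**2)

# The same minimizer written as classical Tikhonov regularization.
lam = s**2 / g**2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

The paper's contribution concerns the much harder question of whether this correspondence survives limits of priors (Gamma-convergence of the OM functionals), not this finite-dimensional identity.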
We derive Onsager-Machlup functionals for countable product measures on weighted ℓ^p subspaces of the sequence space R^N. Each measure in the product is a shifted and scaled copy of a reference probability measure on R that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with Part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.
The Kramers problem for SDEs driven by small, accelerated Lévy noise with exponentially light jumps
(2021)
We establish Freidlin-Wentzell results for a nonlinear ordinary differential equation starting close to the stable state 0, say, subject to a perturbation by a stochastic integral which is driven by an ε-small and (1/ε)-accelerated Lévy process with exponentially light jumps. For this purpose, we derive a large deviations principle for the stochastically perturbed system using the weak convergence approach developed by Budhiraja, Dupuis, Maroulas and collaborators in recent years. In the sequel, we solve the associated asymptotic first escape problem from the bounded neighborhood of 0 in the limit ε → 0, which is also known as the Kramers problem in the literature.
Androulidakis and Skandalis (2009) showed that every singular foliation has an associated topological groupoid, called the holonomy groupoid. In this note, we exhibit some functorial properties of this assignment: if a foliated manifold (M, F_M) is the quotient of a foliated manifold (P, F_P) along a surjective submersion with connected fibers, then the same is true for the corresponding holonomy groupoids. For quotients by a Lie group action, an analogous statement holds under suitable assumptions, yielding a Lie 2-group action on the holonomy groupoid.
In this paper we prove a strengthening of a theorem of Chang, Weinberger and Yu on obstructions to the existence of positive scalar curvature metrics on compact manifolds with boundary. They construct a relative index for the Dirac operator, which lives in a relative K-theory group, measuring the difference between the fundamental group of the boundary and of the full manifold.
Whenever the Riemannian metric has product structure and positive scalar curvature near the boundary, one can define an absolute index of the Dirac operator taking values in the K-theory of the C*-algebra of the fundamental group of the full manifold. This index depends on the metric near the boundary. We prove that (a slight variation of) the relative index of Chang, Weinberger and Yu is the image of this absolute index under the canonical map of K-theory groups.
This has the immediate corollary that positive scalar curvature on the whole manifold implies vanishing of the relative index, giving a conceptual and direct proof of the vanishing theorem of Chang, Weinberger and Yu (rather, of a slight variation of it). To take the fundamental groups of the manifold and its boundary into account requires working with maximal C*-completions of the involved *-algebras. A significant part of this paper is devoted to foundational results regarding these completions. On the other hand, we introduce and propose a more conceptual and more geometric completion, which still has all the required functoriality.
Nonparametric goodness-of-fit testing for parametric covariate models in pharmacometric analyses
(2021)
The characterization of covariate effects on model parameters is a crucial step during pharmacokinetic/pharmacodynamic analyses. Although covariate selection criteria have been studied extensively, the choice of the functional relationship between covariates and parameters has received much less attention. Often, a simple particular class of covariate-to-parameter relationships (linear, exponential, etc.) is chosen ad hoc or based on domain knowledge, and a statistical evaluation is limited to the comparison of a small number of such classes. Goodness-of-fit testing against a nonparametric alternative provides a more rigorous approach to covariate model evaluation, but no such test has been proposed so far. In this manuscript, we derive and evaluate nonparametric goodness-of-fit tests in which a parametric covariate model constitutes the null hypothesis, tested against a kernelized Tikhonov-regularized alternative, transferring concepts from statistical learning to the pharmacological setting. The approach is evaluated in a simulation study on the estimation of the age-dependent maturation effect on the clearance of a monoclonal antibody. Scenarios of varying data sparsity and residual error are considered. The goodness-of-fit test correctly identified misspecified parametric models with high power for relevant scenarios. The case study provides proof-of-concept of the feasibility of the proposed approach, which is envisioned to be beneficial for applications that lack well-founded covariate models.
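The idea of pitting a parametric null model against a kernelized Tikhonov-regularized alternative can be illustrated on a maturation-like covariate effect. This sketch only contrasts the fits of the two model classes; it is not the paper's test statistic, and the Emax-shaped truth, kernel bandwidth, and regularization weight are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Saturating (maturation-like) covariate effect of age on clearance.
age = rng.uniform(0.0, 10.0, size=120)
clearance = age / (1.0 + age) + rng.normal(scale=0.05, size=120)

# Null model: a linear covariate-to-parameter relationship.
X = np.column_stack([np.ones_like(age), age])
beta = np.linalg.lstsq(X, clearance, rcond=None)[0]
sse_lin = np.sum((X @ beta - clearance) ** 2)

# Alternative: Gaussian-kernel ridge regression (kernelized Tikhonov).
K = np.exp(-0.5 * (age[:, None] - age[None, :]) ** 2)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(age)), clearance)
sse_ker = np.sum((K @ alpha - clearance) ** 2)
```

A large gap between the two residual sums of squares is the informal signal of parametric misspecification that the formal goodness-of-fit test turns into a calibrated decision.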
The geomagnetic Kp index is one of the most extensively used indices of geomagnetic activity, both for scientific and operational purposes. This article reviews the properties of the Kp index and provides a reference for users of the Kp index and associated data products as derived and distributed by the GFZ German Research Centre for Geosciences. The near real-time production of the nowcast Kp index is of particular interest for space weather services and here we describe and evaluate its current setup.
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows
(2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on a Gaussian process regression we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach.
Author summary: Amoeboid motion is a crawling-like cell migration that plays a key role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane.
From fluorescence images, we obtain a sequence of points, representing the cell membrane, for each time step. By using regression analysis on these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question is raised of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane providing a fully automated way of extracting properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
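The smoothing step — Gaussian process regression applied to noisy points sampled along a closed contour — can be sketched for a single coordinate. The periodic kernel, length scale, and circular test contour are illustrative assumptions, not AmoePy's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

# GP regression with a periodic squared-exponential kernel, smoothing one
# coordinate of noisy contour points parameterized by theta in [0, 2*pi).
def gp_smooth(theta, f_noisy, ell=0.5, sig_n=0.1):
    def k(t1, t2):
        # Periodic SE kernel, so the smoothed contour closes up.
        d = np.sin(0.5 * (t1[:, None] - t2[None, :]))
        return np.exp(-2.0 * d**2 / ell**2)
    K = k(theta, theta) + sig_n**2 * np.eye(len(theta))
    return k(theta, theta) @ np.linalg.solve(K, f_noisy)   # posterior mean

m = 80
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
truth = np.cos(theta)                    # x-coordinate of a unit circle
noise = 0.1
f_noisy = truth + rng.normal(scale=noise, size=m)
f_smooth = gp_smooth(theta, f_noisy, sig_n=noise)

raw_err = np.sqrt(np.mean((f_noisy - truth) ** 2))
gp_err = np.sqrt(np.mean((f_smooth - truth) ** 2))
```

Evaluating the same posterior mean at a denser grid of parameter values is what yields a continuous contour from which virtual markers can then be tracked between frames.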
We prove a homology vanishing theorem for graphs with positive Bakry-Émery curvature, analogous to a classic result of Bochner on manifolds [3]. Specifically, we prove that if a graph has positive curvature at every vertex, then its first homology group is trivial, where the notion of homology that we use for graphs is the path homology developed by Grigor'yan, Lin, Muranov, and Yau [11]. We moreover prove that the fundamental group is finite for graphs with positive Bakry-Émery curvature, analogous to a classic result of Myers on manifolds [22]. The proofs draw on several separate areas of graph theory, including graph coverings, gain graphs, and cycle spaces, in addition to the Bakry-Émery curvature, path homology, and graph homotopy. The main results follow as a consequence of several different relationships developed among these different areas. Specifically, we show that a graph with positive curvature cannot have a non-trivial infinite cover preserving 3-cycles and 4-cycles, and give a combinatorial interpretation of the first path homology in terms of the cycle space of a graph. Furthermore, we relate gain graphs to graph homotopy and the fundamental group developed by Grigor'yan, Lin, Muranov, and Yau [12], and obtain an alternative proof of their result that the abelianization of the fundamental group of a graph is isomorphic to the first path homology over the integers.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
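The forward committor — the central TPT object — can be estimated directly from trajectory data by counting which metastable set is hit first, with no Markov model in between. A toy sketch on a symmetric random walk, where the exact committor q(i) = i/L is known (the walk, cell partition, and sample size are illustrative assumptions, not the paper's estimator in full):

```python
import numpy as np

rng = np.random.default_rng(7)

# Forward committor q(i) = P(hit B before A | start in state i) for a
# symmetric random walk on {0, ..., L}, with A = {0} and B = {L}.
L = 10

def hits_B_first(i):
    while 0 < i < L:
        i += 2 * int(rng.integers(0, 2)) - 1   # +/-1 step with equal probability
    return i == L

samples = 800
q = np.array([np.mean([hits_B_first(i) for _ in range(samples)])
              for i in range(L + 1)])
exact = np.arange(L + 1) / L   # known closed-form committor
```

Replacing the discrete states by Voronoi cells of sampled points gives the paper's setting; the convergence results there quantify exactly how such counting estimates approach the true committor as the partition is refined.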
A sufficient quantitative understanding of aluminium (Al) toxicokinetics (TK) in man is still lacking, although highly desirable for risk assessment of Al exposure. Baseline exposure and the risk of contamination severely limit the feasibility of TK studies administering the naturally occurring isotope Al-27, both in animals and man. These limitations are absent in studies with Al-26 as a tracer, but tissue data are limited to animal studies. A TK model capable of inter-species translation to make valid predictions of Al levels in humans-especially in toxicological relevant tissues like bone and brain-is urgently needed. Here, we present: (i) a curated dataset which comprises all eligible studies with single doses of Al-26 tracer administered as citrate or chloride salts orally and/or intravenously to rats and humans, including ultra-long-term kinetic profiles for plasma, blood, liver, spleen, muscle, bone, brain, kidney, and urine up to 150 weeks; and (ii) the development of a physiology-based (PB) model for Al TK after intravenous and oral administration of aqueous Al citrate and Al chloride solutions in rats and humans. Based on the comprehensive curated Al-26 dataset, we estimated substance-dependent parameters within a non-linear mixed-effect modelling context. The model fitted the heterogeneous Al-26 data very well and was successfully validated against datasets in rats and humans. The presented PBTK model for Al, based on the most extensive and diverse dataset of Al exposure to date, constitutes a major advancement in the field, thereby paving the way towards a more quantitative risk assessment in humans.
Partial clones
(2020)
A set C of operations defined on a nonempty set A is said to be a clone if C is closed under composition of operations and contains all projection mappings. The concept of a clone is one of the main concepts of algebra and has important applications in computer science. A clone can also be regarded as a many-sorted algebra whose sorts are the n-ary operations defined on the set A for all natural numbers n ≥ 1, and whose operations are the so-called superposition operations S_m^n for natural numbers m, n ≥ 1 together with the projection operations as nullary operations. Clones generalize monoids of transformations defined on the set A and satisfy three clone axioms. The most important axiom is the superassociative law, a generalization of the associative law. If the superposition operations are partial, i.e. not everywhere defined, instead of the many-sorted clone algebra one obtains partial many-sorted algebras, the partial clones. Linear terms, linear tree languages and linear formulas form partial clones. In this paper, we give a survey on partial clones and their properties.
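The superposition operation and the superassociative law are concrete enough to check by machine. A small sketch for total (everywhere-defined) binary operations on the two-element set — in a partial clone the law would only hold as a weak identity, i.e. whenever both sides are defined:

```python
from itertools import product

# Superposition S_n: an (n+1)-tuple (f, g1, ..., gn) of n-ary operations on A
# yields the n-ary operation x -> f(g1(x), ..., gn(x)).
def S(f, *gs):
    return lambda *xs: f(*(g(*xs) for g in gs))

# Check the superassociative law for n = 2 on A = {0, 1}:
#   S(S(f, g1, g2), h1, h2) == S(f, S(g1, h1, h2), S(g2, h1, h2)).
A = (0, 1)
f  = lambda a, b: a & b
g1 = lambda a, b: a | b
g2 = lambda a, b: a ^ b
h1 = lambda a, b: 1 - a    # depends only on the first argument
h2 = lambda a, b: b        # the second projection

lhs = S(S(f, g1, g2), h1, h2)
rhs = S(f, S(g1, h1, h2), S(g2, h1, h2))
holds = all(lhs(a, b) == rhs(a, b) for a, b in product(A, repeat=2))
```

The particular operations are arbitrary: the law is an identity of composition and holds for every choice of total n-ary operations.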
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the number of model parameters on an optimal level. The rjMCMC algorithm produces an ensemble as a result: a set of model realizations which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced to convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. A scalar, a vector and a matrix representation for models inferred from tomographic subsurface investigations are presented, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models can retain similar statistical ensemble properties as geologic projections created by rasterization.
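The scalar-conversion idea can be sketched as follows: map each variable-dimension model to a single summary number, then apply a classic diagnostic to the resulting fixed-dimension chains. The Gelman-Rubin statistic is used here as one representative diagnostic, and the mean-of-parameters summary is an assumed, illustrative conversion (not necessarily one of the paper's three):

```python
import numpy as np

rng = np.random.default_rng(8)

# Gelman-Rubin potential scale reduction factor for a scalar summary.
def gelman_rubin(chains):
    # chains: (n_chains, n_samples) array of a scalar model summary.
    m, n = chains.shape
    means = chains.mean(axis=1)
    B = n * means.var(ddof=1)               # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return float(np.sqrt(var_hat / W))

# Stand-in for a transdimensional sampler: each draw has a random dimension,
# and the scalar conversion is simply the mean of the model parameters.
def draw_model_summary():
    k = int(rng.integers(2, 10))            # dimension varies per sample
    return rng.normal(size=k).mean()

chains = np.array([[draw_model_summary() for _ in range(2000)]
                   for _ in range(2)])
r_hat = gelman_rubin(chains)                # near 1 for well-mixed chains
```

Values of the statistic well above 1 would indicate that the chains are still influenced by their starting models, i.e. that the burn-in cutoff has been placed too early.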
We study the Cauchy problem for a nonlinear elliptic equation with data on a piece S of the boundary surface partial derivative X. By the Cauchy problem is meant any boundary value problem for an unknown function u in a domain X with the property that the data on S, if combined with the differential equations in X, allows one to determine all derivatives of u on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We discuss a variational setting of the Cauchy problem which always possesses a generalized solution.
We consider a perturbation of the de Rham complex on a compact manifold with boundary. This perturbation goes beyond the framework of complexes, and so cohomology does not apply to it. On the other hand, its curvature is "small", hence there is a natural way to introduce an Euler characteristic and develop a Lefschetz theory for the perturbation. This work is intended as an attempt to develop a cohomology theory for arbitrary sequences of linear mappings.
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting the spatial distribution of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress changes. These uncertainties arise mainly from nonunique slip inversions and unknown receiver faults; for the latter in particular, results depend strongly on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, and deep neural network (DNN) approaches are all superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effects of the magnitude cutoff, grid-size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for ranking stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results improve significantly for larger aftershocks and shorter time periods, without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers; these confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and also better than the distance-slip probabilistic model.
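For binary testing of this kind, a ROC curve is obtained by ranking grid cells by the value of a stress metric and sweeping the decision threshold. The following is a generic sketch; the synthetic metric and labels are illustrative stand-ins, not the study's data.

```python
import numpy as np

def roc_curve(scores, labels):
    """ROC points for a stress metric used as a binary aftershock predictor.

    scores: metric value per grid cell (higher = aftershock predicted).
    labels: 1 if the cell contains an aftershock, else 0.
    """
    order = np.argsort(scores)[::-1]          # descending by metric value
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                    # true positives at each cut
    fp = np.cumsum(1 - labels)                # false positives at each cut
    tpr = tp / labels.sum()
    fpr = fp / (len(labels) - labels.sum())
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# A metric correlated with aftershock occurrence should give AUC > 0.5.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)
scores = labels + rng.normal(scale=1.0, size=1000)  # informative toy metric
fpr, tpr = roc_curve(scores, labels)
print(auc(fpr, tpr))
```

Ranking competing stress metrics then amounts to comparing their AUC values on the same aftershock grid.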
We study those nonlinear partial differential equations which appear as Euler-Lagrange equations of variational problems. By defining weak boundary values of solutions to such equations, we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the currently important concept of mapping degree applies to Lagrangian problems.
The study of the Cauchy problem for solutions of the heat equation in a cylindrical domain with data on the lateral surface by the Fourier method raises the problem of calculating the inverse Laplace transform of the entire function cos √z. This problem has no solution in the standard theory of the Laplace transform. We give an explicit formula for the inverse Laplace transform of cos √z using the theory of analytic functionals. This solution is well suited to developing efficient regularizations of solutions to Cauchy problems for parabolic equations with data on noncharacteristic surfaces.
We study the asymptotics of solutions to the Dirichlet problem in a domain X ⊂ ℝ³ whose boundary contains a singular point O. In a small neighborhood of this point, the domain has the form {z > √(x² + y⁴)}, i.e., the origin is a nonsymmetric conical point of the boundary. So far, the behavior of solutions to elliptic boundary-value problems has not been studied sufficiently in the case of nonsymmetric singular points. This problem was posed by V.A. Kondrat'ev in 2000. We establish a complete asymptotic expansion of solutions near the singular point.
Arborified zeta values are defined as iterated series and integrals using the universal properties of rooted trees. This approach makes it possible to study their convergence domain and to relate them to multiple zeta values. Generalisations to rooted trees of the stuffle and shuffle products are defined and studied. It is further shown that arborified zeta values are algebra morphisms for these new products on trees.
Thermophysical modelling and parameter estimation of small solar system bodies via data assimilation
(2020)
Deriving thermophysical properties such as thermal inertia from thermal infrared observations provides useful insights into the structure of the surface material on planetary bodies. The estimation of these properties is usually done by fitting temperature variations calculated by thermophysical models to infrared observations. For multiple free model parameters, traditional methods such as least-squares fitting or Markov chain Monte Carlo methods become computationally too expensive. Consequently, the simultaneous estimation of several thermophysical parameters, together with their corresponding uncertainties and correlations, is often not computationally feasible, and the analysis is usually reduced to fitting one or two parameters. Data assimilation (DA) methods have been shown to be robust while sufficiently accurate and computationally affordable even for a large number of parameters. This paper introduces a standard sequential DA method, the ensemble square root filter, for thermophysical modelling of asteroid surfaces. This method is used to re-analyse infrared observations of the MARA instrument, which measured the diurnal temperature variation of a single boulder on the surface of near-Earth asteroid (162173) Ryugu. The thermal inertia is estimated to be 295 ± 18 J m⁻² K⁻¹ s⁻¹/², while all five free parameters of the initial analysis are varied and estimated simultaneously. Based on this thermal inertia estimate, the thermal conductivity of the boulder is estimated to be between 0.07 and 0.12 W m⁻¹ K⁻¹ and the porosity between 0.30 and 0.52. For the first time in thermophysical parameter derivation, correlations and uncertainties of all free model parameters are incorporated in the estimation procedure, which is more than 5000 times more efficient than a comparable parameter sweep.
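A minimal sketch of one analysis step of a sequential ensemble square root filter of the kind described here, for a single scalar observation. The deterministic anomaly scaling follows the standard Whitaker-Hamill form; the toy observation operator and parameter values are assumptions for illustration, not the MARA setup.

```python
import numpy as np

def esrf_update(ensemble, obs, obs_op, obs_var):
    """One ensemble square root filter analysis step, scalar observation.

    ensemble: (n_ens, n_par) array of parameter vectors (e.g. thermal
    inertia, emissivity, ...); obs_op maps a parameter vector to the
    predicted observation (e.g. a modelled surface temperature).
    """
    n_ens = ensemble.shape[0]
    pred = np.apply_along_axis(obs_op, 1, ensemble)   # predicted observations
    x_mean = ensemble.mean(axis=0)
    y_mean = pred.mean()
    X = ensemble - x_mean                             # parameter anomalies
    Y = pred - y_mean                                 # observation anomalies
    pyy = Y @ Y / (n_ens - 1) + obs_var               # innovation variance
    pxy = X.T @ Y / (n_ens - 1)                       # cross-covariance
    K = pxy / pyy                                     # Kalman gain
    x_mean = x_mean + K * (obs - y_mean)              # mean update
    # Deterministic (square root) anomaly scaling, Whitaker-Hamill form.
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / pyy))
    X = X - alpha * np.outer(Y, K)
    return x_mean + X

# Toy example: a scalar parameter observed directly with noise.
rng = np.random.default_rng(2)
ens = rng.normal(0.0, 2.0, size=(200, 1))
post = esrf_update(ens, obs=1.0, obs_op=lambda x: x[0], obs_var=0.5)
print(post.mean(), post.var())
```

Sequential assimilation of a diurnal temperature curve would simply loop this update over the individual measurements.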
We extend our approach of asymptotic parametrix construction for Hamiltonian operators from conical to edge-type singularities, which is applicable to coalescence points of two particles of the helium atom and related two-electron systems, including the hydrogen molecule. Up to second order, we have calculated the symbols of an asymptotic parametrix of the nonrelativistic Hamiltonian of the helium atom within the Born-Oppenheimer approximation and provide explicit formulas for the corresponding Green operators, which encode the asymptotic behavior of the eigenfunctions near an edge.
When trying to cast the free fermion in the framework of functorial field theory, its chiral anomaly manifests in the fact that it assigns the determinant of the Dirac operator to a top-dimensional closed spin manifold, which is not a number as expected, but an element of a complex line. In functorial field theory language, this means that the theory is twisted, which gives rise to an anomaly theory. In this paper, we give a detailed construction of this anomaly theory, as a functor that sends manifolds to infinite-dimensional Clifford algebras and bordisms to bimodules.
Author summary: The use of orally inhaled drugs for treating lung diseases is appealing since they have the potential for lung selectivity, i.e. high exposure at the site of action (the lung) without excessive side effects. However, the degree of lung selectivity depends on a large number of factors, including physicochemical properties of the drug molecules, patient disease state, and inhalation devices. To predict the impact of these factors on drug exposure and thereby to understand the characteristics of an optimal drug for inhalation, we develop a predictive mathematical framework (a "pharmacokinetic model"). In contrast to previous approaches, our model allows combining knowledge from different sources appropriately, and its predictions adequately matched different sets of clinical data. Finally, we compare the impact of different factors and find that the most important ones are the size of the inhaled particles, the affinity of the drug to the lung tissue, and the rate of drug dissolution in the lung. In contrast to common belief, the solubility of a drug in the lining fluids is not found to be relevant. These findings are important for understanding how inhaled drugs should be designed to achieve the best treatment results in patients.
The fate of orally inhaled drugs is determined by pulmonary pharmacokinetic processes such as particle deposition, pulmonary drug dissolution, and mucociliary clearance. Even though each individual process has been systematically investigated, a quantitative understanding of how the processes interact remains limited, and identifying optimal drug and formulation characteristics for orally inhaled drugs is therefore still challenging. To investigate this complex interplay, the pulmonary processes can be integrated into mathematical models. However, existing modeling attempts considerably simplify these processes or are not systematically evaluated against (clinical) data.
In this work, we developed a mathematical framework based on physiologically-structured population equations to integrate all relevant pulmonary processes mechanistically. A tailored numerical resolution strategy was chosen and the mechanistic model was evaluated systematically against data from different clinical studies. Without adapting the mechanistic model or estimating kinetic parameters based on individual study data, the developed model was able to predict simultaneously (i) lung retention profiles of inhaled insoluble particles, (ii) particle size-dependent pharmacokinetics of inhaled monodisperse particles, (iii) pharmacokinetic differences between inhaled fluticasone propionate and budesonide, as well as (iv) pharmacokinetic differences between healthy volunteers and asthmatic patients. Finally, to identify the most impactful optimization criteria for orally inhaled drugs, the developed mechanistic model was applied to investigate the impact of input parameters on both the pulmonary and systemic exposure. Interestingly, the solubility of the inhaled drug did not have any relevant impact on the local and systemic pharmacokinetics. Instead, the pulmonary dissolution rate, the particle size, the tissue affinity, and the systemic clearance were the most impactful potential optimization parameters. In the future, the developed prediction framework should be considered a powerful tool for identifying optimal drug and formulation characteristics.
Large emissions
(2020)
Pinned Gibbs processes
(2020)
We construct marked Gibbs point processes in ℝ^d under quite general assumptions. Firstly, we allow for interaction functionals that may be unbounded and whose range is not assumed to be uniformly bounded. Indeed, our typical interaction admits an a.s. finite but random range. Secondly, the random marks, attached to the locations in ℝ^d, belong to a general normed space G. They are not bounded, but their law should admit a super-exponential moment. The approach used here relies on the so-called entropy method and large-deviation tools in order to prove tightness of a family of finite-volume Gibbs point processes. An application to infinite-dimensional interacting diffusions is also presented.
The IGRF offers an important incentive for testing algorithms that predict changes in the Earth's magnetic field, known as secular variation (SV), over a 5-year range. Here, we present an SV candidate model for the 13th IGRF that stems from a sequential ensemble data assimilation approach (EnKF). The ensemble consists of a number of parallel-running 3D dynamo simulations. The assimilated data are geomagnetic field snapshots covering the years 1840 to 2000 from the COV-OBS.x1 model and 2001 to 2020 from the Kalmag model. A spectral covariance localization method, considering the couplings between spherical harmonics of the same equatorial symmetry and the same azimuthal wave number, allows the ensemble size to be decreased to about 100 while maintaining the stability of the assimilation. The quality of 5-year predictions is tested for the past two decades. These tests show that the assimilation scheme is able to reconstruct the overall SV evolution. They also suggest that a better 5-year forecast is obtained by keeping the SV constant rather than letting it evolve dynamically. However, the quality of the dynamical forecast steadily improves over the full assimilation window (180 years). We therefore propose the instantaneous SV estimate for 2020 from our assimilation as a candidate model for IGRF-13. The ensemble approach provides uncertainty estimates, which closely match the residual differences with respect to IGRF-13. Longer-term predictions for the evolution of the main magnetic field features over a 50-year range are also presented. We observe a further decrease of the axial dipole at a mean rate of 8 nT/year as well as a deepening and broadening of the South Atlantic Anomaly. The magnetic dip poles are seen to approach an eccentric dipole configuration.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In the case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE constrained Bayesian inverse problem.
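The central idea, preconditioning Langevin dynamics with the empirical ensemble covariance whose square root is realised through centred anomalies rather than formed explicitly, can be sketched as follows. Step size, ensemble size and the Gaussian toy target are illustrative choices, not the paper's experiments.

```python
import numpy as np

def aldi_step(X, grad_log_p, dt, rng):
    """One Euler-Maruyama step of an ALDI-style ensemble Langevin sampler.

    X: (J, d) particle ensemble. The empirical covariance preconditions
    the drift; its square root acts via the centred anomalies, so no
    matrix inverse or square root is ever computed.
    """
    J, d = X.shape
    mean = X.mean(axis=0)
    Xc = X - mean                                   # centred anomalies
    C = Xc.T @ Xc / J                               # empirical covariance
    # Preconditioned drift plus the finite-ensemble correction term.
    drift = grad_log_p(X) @ C + (d + 1) / J * Xc
    # Noise with per-particle covariance 2*dt*C, realised through Xc.
    noise = np.sqrt(2 * dt / J) * rng.normal(size=(J, J)) @ Xc
    return X + dt * drift + noise

# Toy target: standard 1D Gaussian, grad log p(x) = -x.
rng = np.random.default_rng(3)
X = rng.normal(5.0, 1.0, size=(64, 1))             # deliberately offset start
for _ in range(4000):
    X = aldi_step(X, lambda x: -x, dt=0.01, rng=rng)
print(X.mean(), X.std())
```

Because both drift and noise are built from the anomalies, rescaling or shifting the target coordinates rescales the sampler identically, which is the affine-invariance property.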
Understanding the macroscopic behavior of dynamical systems is an important tool to unravel transport mechanisms in complex flows. A decomposition of the state space into coherent sets is a popular way to reveal this essential macroscopic evolution. To compute coherent sets from an aperiodic time-dependent dynamical system we consider the relevant transfer operators and their infinitesimal generators on an augmented space-time manifold. This space-time generator approach avoids trajectory integration and creates a convenient linearization of the aperiodic evolution. This linearization can be further exploited to create a simple and effective spectral optimization methodology for diminishing or enhancing coherence. We obtain explicit solutions for these optimization problems using Lagrange multipliers and illustrate this technique by increasing and decreasing mixing of spatial regions through small velocity field perturbations.
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
(2020)
In this paper, we consider a nonlinear ill-posed inverse problem with noisy data in the statistical learning setting. Tikhonov regularization in Hilbert scales is used to reconstruct the estimator from the random noisy data. In this statistical learning setting, we derive rates of convergence for the regularized solution under certain assumptions on the nonlinear forward operator and on the prior. We discuss estimates of the reconstruction error using the approach of reproducing kernel Hilbert spaces.
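Schematically, and with illustrative notation rather than the paper's, the estimator in this setting is a penalized least-squares problem over a Hilbert scale:

```latex
% Tikhonov regularization in Hilbert scales (illustrative notation):
% F is the nonlinear forward operator, (t_i, y_i) the random noisy data,
% B a densely defined unbounded operator generating the scale, s > 0.
\hat{x}_\alpha \;\in\; \operatorname*{arg\,min}_{x}\;
  \frac{1}{n}\sum_{i=1}^{n} \bigl| (F(x))(t_i) - y_i \bigr|^{2}
  \;+\; \alpha \,\bigl\| B^{s}\,(x - \bar{x}) \bigr\|^{2}
```

Here α is the regularization parameter and x̄ an initial guess; "oversmoothing" refers to the case where the penalty norm is stronger than the smoothness actually possessed by the true solution.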
Let D be a division ring of fractions of a crossed product F[G, eta, alpha], where F is a skew field and G is a group with Conradian left-order ≤. For D we introduce the notion of freeness with respect to ≤ and show that D is free in this sense if and only if D can canonically be embedded into the endomorphism ring of the right F-vector space F((G)) of all formal power series in G over F with respect to ≤. From this we obtain that all division rings of fractions of F[G, eta, alpha] which are free with respect to at least one Conradian left-order of G are isomorphic and that they are free with respect to any Conradian left-order of G. Moreover, F[G, eta, alpha] possesses a division ring of fractions which is free in this sense if and only if the rational closure of F[G, eta, alpha] in the endomorphism ring of the corresponding right F-vector space F((G)) is a skew field.
In the limit ħ → 0, we analyze a class of Schrödinger operators H_ħ = ħ²L + ħW + V·id_ℰ acting on sections of a vector bundle ℰ over a Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field, and the potential energy V has a non-degenerate minimum at some point p ∈ M. We construct quasimodes of WKB-type near p for eigenfunctions associated with the low-lying eigenvalues of H_ħ. These are obtained from eigenfunctions of the associated harmonic oscillator H_{p,ħ} at p, acting on smooth functions on the tangent space.
Interacting particle solutions of Fokker-Planck equations through gradient-log-density estimation
(2020)
Fokker-Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and it is often inevitable to resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. The performance of our method shows promising results, with more accurate and less fluctuating statistics compared to direct stochastic simulations of comparable particle number. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example, in the context of optimal control.
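A minimal deterministic-particle version of this idea, using a simple Gaussian-kernel score estimate as a stand-in for the paper's statistical estimator, might look like this for the 1D Ornstein-Uhlenbeck Fokker-Planck equation (all parameter values are illustrative):

```python
import numpy as np

def grad_log_density(x, particles, h):
    """Kernel-based estimate of d/dx log rho at points x from a particle cloud.

    Uses a Gaussian kernel density estimate with bandwidth h; a crude
    stand-in for the more sophisticated estimator of the paper.
    """
    diff = x[:, None] - particles[None, :]          # (n, m) pairwise diffs
    w = np.exp(-0.5 * (diff / h) ** 2)
    # d/dx log KDE(x) = sum_j w_j * (-diff_j / h^2) / sum_j w_j
    return (w * (-diff / h**2)).sum(axis=1) / w.sum(axis=1)

# Deterministic particle flow for the 1D OU Fokker-Planck equation
#   d rho/dt = d/dx (x rho) + d^2/dx^2 rho,
# realised as dX/dt = -X - grad log rho(X); stationary law is N(0, 1).
rng = np.random.default_rng(4)
X = rng.normal(3.0, 0.5, size=400)                  # offset initial cloud
dt = 0.01
for _ in range(1000):
    X = X + dt * (-X - grad_log_density(X, X, h=0.3))
print(X.mean(), X.std())
```

The noise term of the stochastic dynamics is replaced by the deterministic score drift, so repeated runs give identical, non-fluctuating statistics, which mirrors the variance reduction reported above.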
We consider rough metrics on smooth manifolds and the corresponding Laplacians induced by such metrics. We demonstrate that globally continuous heat kernels exist and are Hölder continuous locally in space and time. This is done via local parabolic Harnack estimates for weak solutions of operators in divergence form with bounded measurable coefficients in weighted Sobolev spaces.
The canonical trace and the Wodzicki residue on classical pseudo-differential operators on a closed manifold are characterised by their locality and shown to be preserved under lifting to the universal covering as a result of their local feature. As a consequence, we lift a class of spectral zeta-invariants using lifted defect formulae which express discrepancies of zeta-regularised traces in terms of Wodzicki residues. We derive Atiyah's L²-index theorem as an instance of the ℤ₂-graded generalisation of the canonical lift of spectral zeta-invariants, and we show that certain lifted spectral zeta-invariants for geometric operators are integrals of Pontryagin and Chern forms.
We investigate whether kernel regularization methods can achieve minimax convergence rates under a source-condition regularity assumption on the target function. These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator. With a view toward distribution-free results, we investigate this issue under much weaker assumptions on the eigenvalue decay, allowing for more complex behavior that can reflect different structures in the data at different scales.
Let H be a Schrödinger operator defined on a noncompact Riemannian manifold Ω, and let W ∈ L^∞(Ω; ℝ). Suppose that the operator H + W is critical in Ω, and let φ be the corresponding Agmon ground state. We prove that if u is a generalized eigenfunction of H satisfying |u| ≤ Cφ in Ω for some constant C > 0, then the corresponding eigenvalue is in the spectrum of H. The conclusion also holds true if for some K ⋐ Ω the operator H admits a positive solution in Ω′ = Ω \ K, and |u| ≤ Cψ in Ω′ for some constant C > 0, where ψ is a positive solution of minimal growth in a neighborhood of infinity in Ω. Under natural assumptions, this result holds also in the context of infinite graphs and Dirichlet forms.