Institut für Mathematik
Large emissions
(2020)
Pinned Gibbs processes
(2020)
We construct marked Gibbs point processes in R^d under quite general assumptions. Firstly, we allow for interaction functionals that may be unbounded and whose range is not assumed to be uniformly bounded; indeed, our typical interaction admits an a.s. finite but random range. Secondly, the random marks, attached to the locations in R^d, belong to a general normed space G. They are not bounded, but their law should admit a super-exponential moment. The approach used here relies on the so-called entropy method and large-deviation tools in order to prove tightness of a family of finite-volume Gibbs point processes. An application to infinite-dimensional interacting diffusions is also presented.
The IGRF offers an important incentive for testing algorithms predicting the Earth's magnetic field changes, known as secular variation (SV), over a 5-year range. Here, we present an SV candidate model for the 13th IGRF that stems from a sequential ensemble data assimilation approach (EnKF). The ensemble consists of a number of parallel-running 3D-dynamo simulations. The assimilated data are geomagnetic field snapshots covering the years 1840 to 2000 from the COV-OBS.x1 model and 2001 to 2020 from the Kalmag model. A spectral covariance localization method, considering the couplings between spherical harmonics of the same equatorial symmetry and the same azimuthal wave number, allows the ensemble size to be decreased to about 100 while maintaining the stability of the assimilation. The quality of 5-year predictions is tested for the past two decades. These tests show that the assimilation scheme is able to reconstruct the overall SV evolution. They also suggest that keeping the SV constant yields a better 5-year forecast than dynamically evolving it. However, the quality of the dynamical forecast steadily improves over the full assimilation window (180 years). We therefore propose the instantaneous SV estimate for 2020 from our assimilation as a candidate model for the IGRF-13. The ensemble approach provides uncertainty estimates, which closely match the residual differences with respect to the IGRF-13. Longer-term predictions for the evolution of the main magnetic field features over a 50-year range are also presented. We observe the further decrease of the axial dipole at a mean rate of 8 nT/year as well as a deepening and broadening of the South Atlantic Anomaly. The magnetic dip poles are seen to approach an eccentric dipole configuration.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE constrained Bayesian inverse problem.
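The core mechanics of the abstract above can be sketched in a few lines: an ensemble of particles whose empirical covariance preconditions the Langevin drift, with the noise generated from the deviation matrix so that neither an inverse nor a square root of the covariance is ever formed. This is only an illustrative sketch, not the authors' code; the Gaussian target, step size, and ensemble size are our own choices, and the finite-ensemble correction term follows the published ALDI dynamics as we understand them.

```python
import numpy as np

def aldi_step(X, grad_log_pi, dt, rng):
    """One Euler-Maruyama step of an ALDI-style affine-invariant
    ensemble Langevin sampler. X has shape (N, d); note that no
    covariance inverse or matrix square root is ever computed."""
    N, d = X.shape
    m = X.mean(axis=0)
    E = (X - m) / np.sqrt(N)          # deviations: E.T @ E = empirical covariance C
    G = np.array([grad_log_pi(x) for x in X])   # (N, d) gradients
    drift = G @ (E.T @ E)             # C-preconditioned gradient
    drift += (d + 1) / N * (X - m)    # finite-ensemble correction term
    noise = np.sqrt(2 * dt) * rng.standard_normal((N, N)) @ E  # covariance 2*dt*C
    return X + dt * drift + noise

# Usage sketch: sample a 2-d Gaussian target N(mu, Sigma).
rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])
Sigma_inv = np.linalg.inv(np.array([[2.0, 0.5], [0.5, 1.0]]))
grad_log_pi = lambda x: -Sigma_inv @ (x - mu)

X = rng.standard_normal((50, 2))
for _ in range(2000):
    X = aldi_step(X, grad_log_pi, 0.01, rng)
```

Because the noise is built from the deviation matrix E, its covariance is automatically 2*dt*C, which is the trick that makes the scheme applicable in high dimensions.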
Understanding the macroscopic behavior of dynamical systems is an important tool to unravel transport mechanisms in complex flows. A decomposition of the state space into coherent sets is a popular way to reveal this essential macroscopic evolution. To compute coherent sets from an aperiodic time-dependent dynamical system we consider the relevant transfer operators and their infinitesimal generators on an augmented space-time manifold. This space-time generator approach avoids trajectory integration and creates a convenient linearization of the aperiodic evolution. This linearization can be further exploited to create a simple and effective spectral optimization methodology for diminishing or enhancing coherence. We obtain explicit solutions for these optimization problems using Lagrange multipliers and illustrate this technique by increasing and decreasing mixing of spatial regions through small velocity field perturbations.
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
(2020)
In this paper, we consider the nonlinear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered to reconstruct the estimator from the random noisy data. In this statistical learning setting, we derive rates of convergence for the regularized solution under certain assumptions on the nonlinear forward operator and on the prior. We discuss estimates of the reconstruction error using the approach of reproducing kernel Hilbert spaces.
Let D be a division ring of fractions of a crossed product F[G, eta, alpha], where F is a skew field and G is a group with Conradian left-order <=. For D we introduce the notion of freeness with respect to <= and show that D is free in this sense if and only if D can canonically be embedded into the endomorphism ring of the right F-vector space F((G)) of all formal power series in G over F with respect to <=. From this we obtain that all division rings of fractions of F[G, eta, alpha] which are free with respect to at least one Conradian left-order of G are isomorphic and that they are free with respect to any Conradian left-order of G. Moreover, F[G, eta, alpha] possesses a division ring of fractions which is free in this sense if and only if the rational closure of F[G, eta, alpha] in the endomorphism ring of the corresponding right F-vector space F((G)) is a skew field.
In the limit ℏ → 0, we analyze a class of Schrödinger operators H_ℏ = ℏ²L + ℏW + V·id_E acting on sections of a vector bundle E over a Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field, and the potential energy V has a non-degenerate minimum at some point p ∈ M. We construct quasimodes of WKB-type near p for eigenfunctions associated with the low-lying eigenvalues of H_ℏ. These are obtained from eigenfunctions of the associated harmonic oscillator H_{p,ℏ} at p, acting on smooth functions on the tangent space.
Interacting particle solutions of Fokker–Planck equations through gradient–log–density estimation
(2020)
Fokker-Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and one often has to resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. The performance of our method shows promising results, with more accurate and less fluctuating statistics compared to direct stochastic simulations of comparable particle number. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example, in the context of optimal control.
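The idea of replacing diffusion by a gradient-log-density (score) term can be sketched for the 1-d Ornstein-Uhlenbeck Fokker-Planck equation. The paper proposes a novel score estimator; as a simple stand-in we use the gradient of a plain Gaussian kernel density estimate, so the bandwidth, particle number, and target below are all our own illustrative assumptions.

```python
import numpy as np

def kde_score(x, h):
    """Gradient of the log of a Gaussian-KDE density estimate,
    evaluated at each particle (1-d). A plain stand-in for the
    paper's gradient-log-density estimator."""
    u = x[:, None] - x[None, :]              # pairwise differences
    K = np.exp(-u**2 / (2 * h**2))
    return -(u * K).sum(axis=1) / (h**2 * K.sum(axis=1))

# Deterministic particle flow for the OU Fokker-Planck equation
#   dp/dt = d/dx(x p) + d^2 p/dx^2,
# realised per particle as dX/dt = -X - score(X), so the diffusion
# term is replaced by the estimated gradient-log-density.
rng = np.random.default_rng(2)
x = 2.0 + 0.5 * rng.standard_normal(500)     # particles start away from equilibrium
dt, h = 0.01, 0.25
for _ in range(1000):
    x = x + dt * (-x - kde_score(x, h))
```

At stationarity the particles approximate the N(0, 1) equilibrium of the OU process (up to KDE smoothing bias), while each trajectory is deterministic, which is what makes the resulting statistics less fluctuating than direct stochastic simulation.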
We consider rough metrics on smooth manifolds and corresponding Laplacians induced by such metrics. We demonstrate that globally continuous heat kernels exist and are Hölder continuous locally in space and time. This is done via local parabolic Harnack estimates for weak solutions of operators in divergence form with bounded measurable coefficients in weighted Sobolev spaces.
The canonical trace and the Wodzicki residue on classical pseudo-differential operators on a closed manifold are characterised by their locality and shown to be preserved under lifting to the universal covering as a result of their local feature. As a consequence, we lift a class of spectral zeta-invariants using lifted defect formulae which express discrepancies of zeta-regularised traces in terms of Wodzicki residues. We derive Atiyah's L²-index theorem as an instance of the ℤ₂-graded generalisation of the canonical lift of spectral zeta-invariants and we show that certain lifted spectral zeta-invariants for geometric operators are integrals of Pontryagin and Chern forms.
We investigate whether kernel regularization methods can achieve minimax convergence rates over a source condition regularity assumption for the target function. These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator. From the perspective of distribution-free results, we investigate this issue under much weaker assumptions on the eigenvalue decay, allowing for more complex behavior that can reflect different structures of the data at different scales.
Let H be a Schrödinger operator defined on a noncompact Riemannian manifold Ω, and let W ∈ L^∞(Ω; ℝ). Suppose that the operator H + W is critical in Ω, and let φ be the corresponding Agmon ground state. We prove that if u is a generalized eigenfunction of H satisfying |u| ≤ Cφ in Ω for some constant C > 0, then the corresponding eigenvalue is in the spectrum of H. The conclusion also holds true if for some K ⋐ Ω the operator H admits a positive solution in Ω̄ = Ω \ K, and |u| ≤ Cψ in Ω̄ for some constant C > 0, where ψ is a positive solution of minimal growth in a neighborhood of infinity in Ω. Under natural assumptions, this result holds also in the context of infinite graphs, and Dirichlet forms.
This paper further improves the Lie group method with Magnus expansion proposed in a previous paper by the authors, to solve some types of direct singular Sturm-Liouville problems. Next, a concrete implementation to the inverse Sturm-Liouville problem algorithm proposed by Barcilon (1974) is provided. Furthermore, computational feasibility and applicability of this algorithm to solve inverse Sturm-Liouville problems of higher order (for n=2,4) are verified successfully. It is observed that the method is successful even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied. In conclusion, this work provides a method that can be adapted successfully for solving a direct (regular/singular) or inverse Sturm-Liouville problem (SLP) of an arbitrary order with arbitrary boundary conditions.
The estimation of a log-concave density on R is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that prevents the need for further metric entropy calculations. We further present computationally more feasible approximations and both an empirical and hierarchical Bayes approach. All priors are illustrated numerically via simulations.
In 1960, Yamabe claimed to have proved the following statement: on every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric, conformally equivalent to g, with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found an error in Yamabe's proof, and as a consequence many mathematicians worked on what became known as the Yamabe problem. In the 1980s, the works of Trudinger, Aubin, and Schoen showed that the statement is indeed true. This has many benefits; for example, when analysing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature may be assumed to be constant.
The question now arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: given a spatially compact globally hyperbolic Lorentzian manifold (M,g), does there exist a metric, conformally equivalent to g, with constant scalar curvature? The aim of this thesis is to investigate this problem.
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function from which the conformal factor is obtained. To keep the prerequisites for treating the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the framework of a Cauchy problem. To this end, the inverse function theorem for Banach spaces is applied in order to derive existence results for semilinear wave equations from existing existence results for linear wave equations. It is proved that, if the nonlinearity satisfies certain conditions, an almost global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis is devoted to the Yamabe equation on globally hyperbolic Lorentzian manifolds. First it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation has an almost global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is nonpositive, a global-in-time solution of the Yamabe equation exists, which, however, is not necessarily positive. It is further shown that if the H²-norm of the scalar curvature with respect to the given metric is bounded in a certain way on a compact time interval, then the solution is positive on this time interval; here it is again assumed that the constant scalar curvature of the conformally equivalent metric is nonpositive. If, in addition, the scalar curvature of the given metric is negative and the metric satisfies certain conditions, then the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a certain way. In both cases, under the stated conditions, a global-in-time positive solution exists if M = I × Σ for a bounded open interval I. Finally, for M = ℝ × Σ, an example of the nonexistence of a global positive solution is given.
Flood loss modeling is a central component of flood risk analysis. Conventionally, this involves univariable and deterministic stage-damage functions. Recent advancements in the field promote the use of multivariable and probabilistic loss models, which consider variables beyond inundation depth and account for prediction uncertainty. Although companies contribute significantly to total loss figures, novel modeling approaches for companies are lacking. Scarce data and the heterogeneity among companies impede the development of company flood loss models. We present three multivariable flood loss models for companies from the manufacturing, commercial, financial, and service sector that intrinsically quantify prediction uncertainty. Based on object-level loss data (n = 1,306), we comparatively evaluate the predictive capacity of Bayesian networks, Bayesian regression, and random forest in relation to deterministic and probabilistic stage-damage functions, serving as benchmarks. The company loss data stem from four postevent surveys in Germany between 2002 and 2013 and include information on flood intensity, company characteristics, emergency response, private precaution, and resulting loss to building, equipment, and goods and stock. We find that the multivariable probabilistic models successfully identify and reproduce essential relationships of flood damage processes in the data. The assessment of model skill focuses on the precision of the probabilistic predictions and reveals that the candidate models outperform the stage-damage functions, while differences among the proposed models are negligible. Although the combination of multivariable and probabilistic loss estimation improves predictive accuracy over the entire data set, wide predictive distributions stress the necessity for the quantification of uncertainty.
Extending the natural numbers by the positive fractions and the negative integers confronts pupils with considerable conceptual hurdles and an upheaval of the basic notions built up until then. This Master's thesis compiles the essential changes, on the level of mental models and of representations, for both number domains, and discusses the cognitive challenges for learners. Based on a discussion of traditional as well as alternative teaching approaches to extending the number domain, a teaching concept for mathematics instruction is developed that proposes introducing fractions and negative numbers in parallel. The recommendations of the teaching concept span the period from the first to the seventh grade, which allows ample time for a careful development and modification of the number concept, and also include didactical considerations as well as concrete suggestions for possible task formats.
This thesis aims at presenting in an organized fashion the required basics to understand the Glauber dynamics as a way of simulating configurations according to the Gibbs distribution of the Curie-Weiss Potts model. Therefore, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and related mixing times. Furthermore, special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, the Glauber dynamics is studied in general and applied afterwards in an exemplary way to the Curie-Weiss model as well as the Curie-Weiss Potts model. The associated considerations are supplemented with two computer simulations aiming to show the cutoff phenomenon and the temperature dependence of the convergence behavior.
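The Glauber dynamics discussed above can be sketched for the simplest case, the Curie-Weiss (Ising, i.e. q = 2 Potts) model: a uniformly chosen spin is resampled from its conditional distribution given all other spins, so the Gibbs measure is invariant. The Hamiltonian normalization and all parameter values below are our own illustrative choices, not taken from the thesis.

```python
import numpy as np

def glauber_sweep(sigma, beta, rng):
    """One sweep (n single-site updates) of Glauber dynamics for the
    Curie-Weiss Ising model with Hamiltonian H = -(1/2n) sum_ij s_i s_j.
    Each chosen spin is resampled from its conditional distribution
    given the others, so the Gibbs distribution is invariant."""
    n = len(sigma)
    for _ in range(n):
        i = rng.integers(n)
        field = (sigma.sum() - sigma[i]) / n     # mean field felt by spin i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        sigma[i] = 1 if rng.random() < p_plus else -1
    return sigma

# Usage sketch: at low temperature (beta > beta_c = 1) the chain
# magnetizes, i.e. |mean spin| approaches the mean-field value.
rng = np.random.default_rng(3)
n = 200
sigma = rng.choice([-1, 1], size=n)
for _ in range(200):
    sigma = glauber_sweep(sigma, beta=2.0, rng=rng)
m = abs(sigma.mean())
```

Running the same loop at beta < 1 instead keeps |m| near zero; comparing the two regimes is exactly the temperature dependence of the convergence behavior that the simulations in the thesis are designed to exhibit.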
The Willmore functional maps an immersed Riemannian manifold to its total mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth away from finitely many points. These results, and those that follow, are based on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It needs to be mentioned that our results stand in contrast to previous expansions of the Hawking energy. However, these expansions are obtained on spheres along the light cone at a given point. At this point it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.
We prove a Feynman path integral formula for the unitary group exp(−itL_{ν,θ}), t ≥ 0, associated with a discrete magnetic Schrödinger operator L_{ν,θ} on a large class of weighted infinite graphs. As a consequence, we get a new Kato-Simon estimate

|exp(−itL_{ν,θ})(x,y)| ≤ exp(−tL_{−deg,0})(x,y),

which controls the unitary group uniformly in the potentials in terms of a Schrödinger semigroup, where the potential deg is the weighted degree function of the graph.
Let M be a compact manifold of dimension n. In this paper, we introduce the mass functions a ≥ 0 ↦ X₊(M)(a) (resp. a ≥ 0 ↦ X₋(M)(a)), defined as the supremum (resp. infimum) of the masses of all metrics on M whose Yamabe constant is larger than a and which are flat on a ball of radius 1 centered at a point p ∈ M. Here, the mass of a metric flat around p is the constant term in the expansion of the Green function of the conformal Laplacian at p. We show that these functions are well defined and have many properties which allow us to obtain applications to the Yamabe invariant (i.e. the supremum of Yamabe constants over the set of all metrics on M).
For the time stationary global geomagnetic field, a new modelling concept is presented. A Bayesian non-parametric approach provides realistic location-dependent uncertainty estimates. Model-related variabilities are dealt with systematically, with few subjective a priori assumptions. Rather than parametrizing the model by Gauss coefficients, a functional analytic approach is applied: the geomagnetic potential is modelled as a Gaussian process, describing a distribution over functions. A priori correlations are given by an explicit kernel function with non-informative dipole contribution. A refined modelling strategy is proposed that accommodates non-linearities of archeomagnetic observables: first, a rough field estimate is obtained considering only sites that provide full field vector records; subsequently, this estimate supports the linearization that incorporates the remaining incomplete records. Results for the archeomagnetic field over the past 1000 yr are in general agreement with previous models, while improved model uncertainty estimates are provided.
This work provides a necessary and sufficient condition for a symbolic dynamical system to admit a sequence of periodic approximations in the Hausdorff topology. The key result proved and applied here uses graphs that are called De Bruijn graphs, Rauzy graphs, or Anderson-Putnam complexes, depending on the community. Combining this with a previous result, the present work justifies rigorously the accuracy and reliability of algorithmic methods used to compute numerically the spectra of a large class of self-adjoint operators. These so-called Hamiltonians describe the effective dynamics of a quantum particle in aperiodic media. No restrictions on the structure of these operators other than general regularity assumptions are imposed; in particular, nearest-neighbor correlation is not necessary. Examples for the Fibonacci and the Golay-Rudin-Shapiro sequences are explicitly provided to illustrate this discussion. While the first sequence has been thoroughly studied by physicists and mathematicians alike, a shroud of mystery still surrounds the latter when it comes to spectral properties. In light of this, the present paper gives a new result that might help uncover a solution.
In this paper, we develop the mathematical tools needed to explore isotopy classes of tilings on hyperbolic surfaces of finite genus, possibly nonorientable, with boundary, and punctured. More specifically, we generalize results on Delaney-Dress combinatorial tiling theory using an extension of mapping class groups to orbifolds, in turn using this to study tilings of covering spaces of orbifolds. Moreover, we study finite subgroups of these mapping class groups. Our results can be used to extend the Delaney-Dress combinatorial encoding of a tiling to yield a finite symbol encoding the complexity of an isotopy class of tilings. The results of this paper provide the basis for a complete and unambiguous enumeration of isotopically distinct tilings of hyperbolic surfaces.
We describe a new, original approach to the modelling of the Earth's magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. This method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. Sources that most significantly contribute to the field measured at the surface of the Earth are modelled. Their separation is based on strong prior information on their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classic approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller time and spatial scales. An important aspect of our method is to yield reliable error bars for all model parameters. These errors, however, are only as reliable as the description of the different sources and the prior information used are realistic. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth edition of the International Geomagnetic Reference Field.
In this article, we propose an all-in-one statement which includes existence, uniqueness, regularity, and numerical approximations of mild solutions for a class of stochastic partial differential equations (SPDEs) with non-globally monotone nonlinearities. The proof of this result exploits the properties of an existing fully explicit space-time discrete approximation scheme, in particular the fact that it satisfies suitable a priori estimates. We also obtain almost sure and strong convergence of the approximation scheme to the mild solutions of the considered SPDEs. We conclude by applying the main result of the article to the stochastic Burgers equations with additive space-time white noise.
We show how to deduce Rellich inequalities from Hardy inequalities on infinite graphs. Specifically, the obtained Rellich inequality bounds a function from above by the Laplacian of the function in terms of weighted norms. These weights involve the Hardy weight and a function which satisfies an eikonal inequality. The results are proven first for Laplacians and are extended to Schrödinger operators afterwards.
The purpose of this paper is to build an algebraic framework suited to regularize branched structures emanating from rooted forests and which encodes the locality principle. This is achieved by means of the universal properties in the locality framework of properly decorated rooted forests. These universal properties are then applied to derive the multivariate regularization of integrals indexed by rooted forests. We study their renormalization, along the lines of Kreimer's toy model for Feynman integrals.
We present a new model of the geomagnetic field spanning the last 20 years and called Kalmag. Deriving from the assimilation of CHAMP and Swarm vector field measurements, it separates the different contributions to the observable field through parameterized prior covariance matrices. To make the inverse problem numerically feasible, it has been sequentialized in time through the combination of a Kalman filter and a smoothing algorithm. The model provides reliable estimates of past, present and future mean fields and associated uncertainties. The version presented here is an update of our IGRF candidates; the amount of assimilated data has been doubled and the considered time window has been extended from [2000.5, 2019.74] to [2000.5, 2020.33].
Global numerical weather prediction (NWP) models have begun to resolve the mesoscale k^(-5/3) range of the energy spectrum, which is known to impose an inherently finite range of deterministic predictability, as errors develop more rapidly on these scales than on the larger scales. However, the dynamics of these errors under the influence of the synoptic-scale k^(-3) range is little studied. Within a perfect-model context, the present work examines the error-growth behavior under such a hybrid spectrum in Lorenz's original model of 1969, and in a series of identical-twin perturbation experiments using an idealized two-dimensional barotropic turbulence model at a range of resolutions. With the typical resolution of today's global NWP ensembles, error growth remains largely uniform across scales. The theoretically expected fast error growth characteristic of a k^(-5/3) spectrum is seen to be largely suppressed in the first decade of the mesoscale range by the synoptic-scale k^(-3) range. However, it emerges once models become fully able to resolve features on something like a 20-km scale, which corresponds to a grid resolution on the order of a few kilometers.
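The identical-twin methodology used above is generic: run the same model twice from states differing by a tiny perturbation and track how the error grows. As a minimal stand-in for the barotropic turbulence model (which is far heavier), the sketch below runs a twin experiment on the Lorenz-63 system; the model choice, perturbation size, and step counts are our own illustrative assumptions.

```python
import numpy as np

def l63_rhs(v, s=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def rk4(v, dt):
    """Classical fourth-order Runge-Kutta step."""
    k1 = l63_rhs(v); k2 = l63_rhs(v + dt / 2 * k1)
    k3 = l63_rhs(v + dt / 2 * k2); k4 = l63_rhs(v + dt * k3)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Identical-twin experiment: spin up onto the attractor, perturb a
# copy by 1e-8, then integrate both and record the error norm.
dt = 0.01
v = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                  # spin-up
    v = rk4(v, dt)
twin = v + np.array([1e-8, 0.0, 0.0])
errs = []
for _ in range(1500):
    v, twin = rk4(v, dt), rk4(twin, dt)
    errs.append(np.linalg.norm(v - twin))
```

On a chaotic attractor the recorded error grows roughly exponentially until it saturates at the attractor size; in the perturbation experiments of the paper, the analogous growth rate is what distinguishes the k^(-5/3) and k^(-3) regimes.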
Inferring causal relations from observational time series data is a key problem across science and engineering whenever experimental interventions are infeasible or unethical. Increasing data availability over the past few decades has spurred the development of a plethora of causal discovery methods, each addressing particular challenges of this difficult task. In this paper, we focus on an important challenge that is at the core of time series causal discovery: regime-dependent causal relations. Dynamical systems often feature transitions depending on some, often persistent, unobserved background regime, and different regimes may exhibit different causal relations. Here, we assume a persistent and discrete regime variable leading to a finite number of regimes within which we may assume stationary causal relations. To detect regime-dependent causal relations, we combine the conditional independence-based PCMCI method [based on a condition-selection step (PC) followed by the momentary conditional independence (MCI) test] with a regime learning optimization approach. PCMCI allows for causal discovery from high-dimensional and highly correlated time series. Our method, Regime-PCMCI, is evaluated on a number of numerical experiments demonstrating that it can distinguish regimes with different causal directions, time lags, and sign of causal links, as well as changes in the variables' autocorrelation. Furthermore, Regime-PCMCI is employed on observations of the El Niño Southern Oscillation and Indian rainfall, demonstrating skill also in real-world datasets.
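For linear-Gaussian data, the conditional independence test at the heart of PCMCI reduces to a partial-correlation test: regress the conditioning set out of both variables and correlate the residuals. The sketch below is not the authors' implementation; the toy process, its coefficients, and the conditioning sets are our own illustrative choices.

```python
import numpy as np

def parcorr(x, y, Z):
    """Partial correlation of x and y given the columns of Z:
    regress Z (plus an intercept) out of both via least squares
    and correlate the residuals."""
    Z1 = np.column_stack([Z, np.ones(len(x))])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy lagged causal link X_{t-1} -> Y_t with autocorrelated drivers.
rng = np.random.default_rng(4)
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.8 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()

# True link X_{t-1} -> Y_t, conditioning on the variables' pasts:
rho = parcorr(x[1:-1], y[2:], np.column_stack([y[1:-1], x[:-2]]))
# Absent link Y_{t-1} -> X_t, conditioning on X_{t-1}:
rho_null = parcorr(y[1:-1], x[2:], np.column_stack([x[1:-1]]))
```

A regime-dependent extension would fit such tests separately within each candidate regime assignment and optimize the assignment, which is the role of the regime-learning step in Regime-PCMCI.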
Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine this data with physical models to produce accurate predictions. For spatial-temporal models, the ensemble Kalman filter with proper localisation techniques is considered to be a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show that the long-time path-wise error has only logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests.
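For readers unfamiliar with localisation, a minimal sketch of a localised ensemble Kalman analysis step is given below. This is a generic discrete-time EnKF update with an elementwise (Schur-product) localisation mask, not the continuous-time Kalman-Bucy filter analysed in the paper; all names and shapes are illustrative.

```python
import numpy as np

def localised_enkf_update(E, y, H, R, loc):
    """One localised EnKF analysis step.

    E: (d, N) ensemble of states, y: (m,) observation, H: (m, d) linear
    observation operator, R: (m, m) observation error covariance,
    loc: (d, m) localisation weights tapering long-range correlations.
    """
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)   # ensemble anomalies
    Y = H @ A                               # anomalies in observation space
    C = (A @ Y.T) / (N - 1) * loc           # localised cross-covariance
    S = (Y @ Y.T) / (N - 1) + R             # innovation covariance
    K = C @ np.linalg.inv(S)                # localised Kalman gain
    return E + K @ (y[:, None] - H @ E)     # shift every member (no perturbed obs)
```

Setting `loc` to zero outside a short interaction range is what keeps the update component-wise and the effective dimension small, which is the mechanism the error bounds above exploit.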
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
Synthetic Aperture Radar (SAR) amplitude measurements from spaceborne sensors are sensitive to surface roughness conditions near their radar wavelength. These backscatter signals are often exploited to assess the roughness of plowed agricultural fields and water surfaces, and less so of complex, heterogeneous geological surfaces. The bedload of mixed sand- and gravel-bed rivers can be considered a mixture of smooth (compacted sand) and rough (gravel) surfaces. Here, we assess backscatter gradients over a large high-mountain alluvial river in the eastern Central Andes with aerially exposed sand and gravel bedload using X-band TerraSAR-X/TanDEM-X, C-band Sentinel-1, and L-band ALOS-2 PALSAR-2 radar scenes. In a first step, we present theory and hypotheses regarding radar response to an alluvial channel bed. We test our hypotheses by comparing backscatter responses over vegetation-free endmember surfaces from inside and outside of the active channel-bed area. We then develop methods to extract smoothed backscatter gradients downstream along the channel using kernel density estimates. In a final step, the local variability of sand-dominated patches is analyzed using Fourier frequency analysis, by fitting stretched-exponential and power-law regression models to the 2-D power spectrum of backscatter amplitude. We find a large range in backscatter depending on the heterogeneity of contiguous smooth and rough patches of bedload material. The SAR amplitude signal responds primarily to the fraction of smooth-sand bedload, but is further modified by gravel elements. The sensitivity to gravel is more apparent in longer wavelength L-band radar, whereas C- and X-band are sensitive only to sand variability. Because the spatial extent of smooth sand patches in our study area is typically < 50 m, only higher resolution sensors (e.g., TerraSAR-X/TanDEM-X) are useful for power spectrum analysis.
Our results show the potential for mapping sand-gravel transitions and local geomorphic complexity in alluvial rivers with aerially exposed bedload using SAR amplitude.
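A common way to quantify roughness scaling of the kind described above is a radially averaged 2-D power spectrum with a log-log power-law fit. The sketch below shows only this generic fit (the stretched-exponential regression used in the paper is omitted); the function name and fitting range are illustrative.

```python
import numpy as np

def spectral_slope(img):
    """Radially averaged 2-D power spectrum and log-log power-law slope fit."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    P = np.abs(F) ** 2                       # 2-D power spectrum
    ny, nx = img.shape
    ky, kx = np.indices((ny, nx))
    r = np.hypot(ky - ny // 2, kx - nx // 2).astype(int)   # radial wavenumber bin
    tbin = np.bincount(r.ravel(), P.ravel()) # summed power per radial bin
    nbin = np.bincount(r.ravel())            # samples per radial bin
    prof = tbin / np.maximum(nbin, 1)        # radially averaged power
    k = np.arange(1, min(ny, nx) // 2)       # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(k), np.log(prof[1 : len(k) + 1]), 1)
    return slope
```

A flat (white-noise) surface yields a slope near zero, while progressively redder spectra (smoother, more correlated surfaces) yield increasingly negative slopes.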
Author summary

Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.

Understanding the decision process underlying gaze control is an important question in cognitive neuroscience with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection process between two states: Should the observer further inspect the information near the current gaze position (local attention) or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye movement behavior. We implemented a Bayesian approach for model parameter inference based on the model's likelihood function. In order to simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is twofold: first, we propose a new model for saccade generation in scene viewing.
Second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
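As a toy illustration of the two-state idea (not the authors' model or its likelihood), one can simulate a Markov switch between a local state generating short saccades and a global state generating long ones; all transition probabilities and amplitude scales below are made up.

```python
import numpy as np

def simulate_scanpath(n, p_lg=0.3, p_gl=0.4, seed=0):
    """Toy two-state switching model of saccade amplitudes.

    State 0 ('local') produces short saccades, state 1 ('global') long ones.
    p_lg / p_gl are the local->global and global->local switch probabilities;
    all numbers are illustrative, not fitted values.
    """
    rng = np.random.default_rng(seed)
    state, amps = 0, []
    for _ in range(n):
        if state == 0 and rng.random() < p_lg:
            state = 1
        elif state == 1 and rng.random() < p_gl:
            state = 0
        scale = 1.0 if state == 0 else 8.0   # mean amplitude per state
        amps.append(rng.exponential(scale))
    return np.array(amps)
```

The resulting amplitude distribution is a mixture of a short-range and a long-range component, which is the qualitative signature the two-state account predicts in fixation data.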
In this paper, we present the convergence rate analysis of the modified Landweber method under a logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. Reconstructions of the shape of an unknown domain for an inverse potential problem using the modified Landweber method are exhibited.
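For intuition, the classical linear Landweber iteration with discrepancy-principle stopping can be sketched as follows. The paper treats a modified Landweber method for nonlinear problems; this is only the textbook linear variant, with illustrative parameter choices.

```python
import numpy as np

def landweber(A, y, delta, tau=1.1, omega=None, max_iter=10000):
    """Linear Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k),
    stopped by the discrepancy principle ||A x_k - y|| <= tau * delta,
    where delta is the noise level and tau > 1."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # convergence needs 0 < omega < 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = y - A @ x
        if np.linalg.norm(r) <= tau * delta:      # discrepancy principle
            break
        x = x + omega * (A.T @ r)
    return x
```

Stopping once the residual reaches the noise level is what regularizes the iteration: continuing further would start fitting the noise rather than the signal.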
Several numerical tools designed to overcome the challenges of smoothing in a non-linear and non-Gaussian setting are investigated for a class of particle smoothers. The considered family of smoothers is induced by the class of linear ensemble transform filters, which contains classical filters such as the stochastic ensemble Kalman filter, the ensemble square root filter, and the recently introduced nonlinear ensemble transform filter. Furthermore, the ensemble transform particle smoother is introduced and particularly highlighted, as it is consistent in the particle limit and does not require assumptions with respect to the family of the posterior distribution. The linear update pattern of the considered class of linear ensemble transform smoothers allows one to implement important supplementary techniques such as adaptive spread corrections, hybrid formulations, and localization in order to facilitate their application to complex estimation problems. These additional features are derived and numerically investigated for a sequence of increasingly challenging test problems.
In this paper, we develop the mathematical tools needed to explore isotopy classes of tilings on hyperbolic surfaces of finite genus, possibly nonorientable, with boundary, and punctured. More specifically, we generalize results on Delaney-Dress combinatorial tiling theory using an extension of mapping class groups to orbifolds, in turn using this to study tilings of covering spaces of orbifolds. Moreover, we study finite subgroups of these mapping class groups. Our results can be used to extend the Delaney-Dress combinatorial encoding of a tiling to yield a finite symbol encoding the complexity of an isotopy class of tilings. The results of this paper provide the basis for a complete and unambiguous enumeration of isotopically distinct tilings of hyperbolic surfaces.
Purpose The anatomy of the circle of Willis (CoW), the brain's main arterial blood supply system, strongly differs between individuals, resulting in highly variable flow fields and intracranial vascularization patterns. To predict subject-specific hemodynamics with high certainty, we propose a data assimilation (DA) approach that merges fully 4D phase-contrast magnetic resonance imaging (PC-MRI) data with a numerical model in the form of computational fluid dynamics (CFD) simulations. Methods To the best of our knowledge, this study is the first to provide a transient state estimate for the three-dimensional velocity field in a subject-specific CoW geometry using DA. High-resolution velocity state estimates are obtained using the local ensemble transform Kalman filter (LETKF). Results Quantitative evaluation shows a considerable reduction (up to 90%) in the uncertainty of the velocity field state estimate after the data assimilation step. Velocity values in vessel areas that are below the resolution of the PC-MRI data (e.g., in posterior communicating arteries) are provided. Furthermore, the uncertainty of the analysis-based wall shear stress distribution is reduced by a factor of 2 for the data assimilation approach when compared to the CFD model alone. Conclusion This study demonstrates the potential of data assimilation to provide detailed information on vascular flow, and to reduce the uncertainty in such estimates by combining various sources of data in a statistically appropriate fashion.
The XI international conference Stochastic and Analytic Methods in Mathematical Physics was held in Yerevan 2 – 7 September 2019 and was dedicated to the memory of the great mathematician Robert Adol’fovich Minlos, who passed away in January 2018.
The present volume collects a large majority of the contributions presented at the conference on the following domains of contemporary interest: classical and quantum statistical physics, mathematical methods in quantum mechanics, stochastic analysis, applications of point processes in statistical mechanics. The authors are specialists from Armenia, Czech Republic, Denmark, France, Germany, Italy, Japan, Lithuania, Russia, UK and Uzbekistan.
A particular aim of this volume is to offer young scientists basic material in order to inspire their future research in the wide fields presented here.
In this article, we propose an all-in-one statement which includes existence, uniqueness, regularity, and numerical approximations of mild solutions for a class of stochastic partial differential equations (SPDEs) with non-globally monotone nonlinearities. The proof of this result exploits the properties of an existing fully explicit space-time discrete approximation scheme, in particular the fact that it satisfies suitable a priori estimates. We also obtain almost sure and strong convergence of the approximation scheme to the mild solutions of the considered SPDEs. We conclude by applying the main result of the article to the stochastic Burgers equations with additive space-time white noise.
Relationship between large-scale ionospheric field-aligned currents and electron/ion precipitations
(2020)
In this study, we have derived field-aligned currents (FACs) from magnetometers onboard the Defense Meteorological Satellite Program (DMSP) satellites. The magnetic latitude versus local time distribution of FACs from DMSP shows dependences on the intensity and orientation of the interplanetary magnetic field (IMF) By and Bz components that are comparable with previous findings, which confirms the reliability of the DMSP FAC data set. With simultaneous measurements of precipitating particles from DMSP, we further investigate the relation between large-scale FACs and precipitating particles. Our results show that precipitating electron and ion fluxes both increase in magnitude and extend to lower latitude for enhanced southward IMF Bz, which is similar to the behavior of FACs. Under weak northward and southward Bz conditions, the locations of the R2 current maxima, at both dusk and dawn sides and in both hemispheres, are found to be close to the maxima of the particle energy fluxes, while for the same IMF conditions, R1 currents are displaced further from the respective particle flux peaks. The largest displacement (about 3.5 degrees) is found between the downward R1 current and the ion flux peak at the dawn side. Our results suggest that there exist systematic differences in the locations of electron/ion precipitation and large-scale upward/downward FACs. As outlined by the statistical means of these two parameters, the FAC peaks enclose the particle energy flux peaks in an auroral band at both dusk and dawn sides. Our comparisons also show that particle precipitation at dawn and dusk, in both hemispheres, maximizes near the mean R2 current peaks. The particle precipitation flux maxima closer to the R1 current peaks are lower in magnitude. This is opposite to the known feature that R1 currents are on average stronger than R2 currents.
We show how to deduce Rellich inequalities from Hardy inequalities on infinite graphs. Specifically, the obtained Rellich inequality gives an upper bound on a function by the Laplacian of the function in terms of weighted norms. These weights involve the Hardy weight and a function which satisfies an eikonal inequality. The results are proven first for Laplacians and are extended to Schrödinger operators afterwards.
The rational Krylov subspace method (RKSM) and the low-rank alternating directions implicit (LR-ADI) iteration are established numerical tools for computing low-rank solution factors of large-scale Lyapunov equations. In order to generate the basis vectors for the RKSM, or extend the low-rank factors within the LR-ADI method, the repeated solution to a shifted linear system of equations is necessary. For very large systems this solve is usually implemented using iterative methods, leading to inexact solves within this inner iteration (and therefore to "inexact methods"). We will show that one can terminate this inner iteration before full precision has been reached and still obtain very good accuracy in the final solution to the Lyapunov equation. In particular, for both the RKSM and the LR-ADI method we derive theory for a relaxation strategy (i.e., increasing the solve tolerance of the inner iteration as the outer iteration proceeds) within the iterative methods for solving the large linear systems. These theoretical choices involve unknown quantities, so practical criteria for relaxing the solution tolerance within the inner linear system are then provided. The theory is supported by several numerical examples, which show that the total amount of work for solving Lyapunov equations can be reduced significantly.
Europa Universalis IV
(2020)
We extend our approach of asymptotic parametrix construction for Hamiltonian operators from conical to edge-type singularities which is applicable to coalescence points of two particles of the helium atom and related two electron systems including the hydrogen molecule. Up to second-order, we have calculated the symbols of an asymptotic parametrix of the nonrelativistic Hamiltonian of the helium atom within the Born-Oppenheimer approximation and provide explicit formulas for the corresponding Green operators which encode the asymptotic behavior of the eigenfunctions near an edge.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al., 2020; (Seismol Res Lett, 89, 2020, 1238) conclude that increased animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns, and consequently, the observed anomalies in animal activity do not have any forecasting power.
Purpose This review provides an overview of the current challenges in oral targeted antineoplastic drug (OAD) dosing and outlines the unexploited value of therapeutic drug monitoring (TDM). Factors influencing the pharmacokinetic exposure in OAD therapy are depicted together with an overview of different TDM approaches. Finally, current evidence for TDM for all approved OADs is reviewed. Methods A comprehensive literature search (covering literature published until April 2020), including primary and secondary scientific literature on pharmacokinetics and dose individualisation strategies for OADs, together with US FDA Clinical Pharmacology and Biopharmaceutics Reviews and the Committee for Medicinal Products for Human Use European Public Assessment Reports was conducted. Results OADs are highly potent drugs, which have substantially changed treatment options for cancer patients. Nevertheless, high pharmacokinetic variability and low treatment adherence are risk factors for treatment failure. TDM is a powerful tool to individualise drug dosing, ensure drug concentrations within the therapeutic window and increase treatment success rates. After reviewing the literature for 71 approved OADs, we show that exposure-response and/or exposure-toxicity relationships have been established for the majority. Moreover, TDM has been proven to be feasible for individualised dosing of abiraterone, everolimus, imatinib, pazopanib, sunitinib and tamoxifen in prospective studies. There is a lack of experience in how to best implement TDM as part of clinical routine in OAD cancer therapy. Conclusion Sub-therapeutic concentrations and severe adverse events are current challenges in OAD treatment, which can both be addressed by the application of TDM-guided dosing, ensuring concentrations within the therapeutic window.
Aim: Quantitative and kinetic insights into the drug exposure-disease response relationship might enhance our knowledge on loss of response and support more effective monitoring of inflammatory activity by biomarkers in patients with inflammatory bowel disease (IBD) treated with infliximab (IFX). This study aimed to derive recommendations for dose adjustment and treatment optimisation based on mechanistic characterisation of the relationship between IFX serum concentration and C-reactive protein (CRP) concentration.

Methods: Data from an investigator-initiated trial included 121 patients with IBD during IFX maintenance treatment. Serum concentrations of IFX, antidrug antibodies (ADA), CRP, and disease-related covariates were determined at the mid-term and end of a dosing interval. Data were analysed using a pharmacometric nonlinear mixed-effects modelling approach. An IFX exposure-CRP model was generated and applied to evaluate dosing regimens to achieve CRP remission.

Results: The generated quantitative model showed that IFX has the potential to inhibit up to 72% (9% relative standard error [RSE]) of CRP synthesis in a patient. The IFX concentration leading to 90% of the maximum CRP synthesis inhibition was 18.4 mu g/mL (43% RSE). Presence of ADA was the most influential factor on IFX exposure. With the standard dosing strategy, >= 55% of ADA+ patients experienced CRP nonremission. Shortening the dosing interval and co-therapy with immunomodulators were found to be the most beneficial strategies to maintain CRP remission.

Conclusions: With the generated model we could for the first time establish a robust relationship between IFX exposure and CRP synthesis inhibition, which could be utilised for treatment optimisation in IBD patients.
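The reported numbers imply a simple Emax-type inhibition curve: with maximal inhibition Imax = 0.72 and 90% of that effect reached at 18.4 ug/mL, the implied IC50 is 18.4/9 ≈ 2.04 ug/mL (since C90 = 9 * IC50 for this model class). A minimal sketch of that curve follows; it is not the paper's full nonlinear mixed-effects model.

```python
def crp_inhibition(c, imax=0.72, ic50=18.4 / 9):
    """Fractional inhibition of CRP synthesis at IFX concentration c (ug/mL).

    Emax-type model E(c) = imax * c / (ic50 + c); ic50 is back-calculated
    from the reported concentration giving 90% of maximal inhibition.
    """
    return imax * c / (ic50 + c)
```

Evaluating the curve at the reported concentration of 18.4 ug/mL recovers 90% of the maximal effect, which is a quick consistency check on the back-calculated IC50.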
Background:
Anti-TNFα monoclonal antibodies (mAbs) are a well-established treatment for patients with Crohn’s disease (CD). However, subtherapeutic concentrations of mAbs have been related to a loss of response during the first year of therapy [1]. Therefore, an appropriate dosing strategy is crucial to prevent the underexposure of mAbs for those patients. The aim of our study was to assess the impact of different dosing strategies (fixed dose or body size descriptor adapted) on drug exposure and the target concentration attainment for two different anti-TNFα mAbs: infliximab (IFX, body weight (BW)-based dosing) and certolizumab pegol (CZP, fixed dosing). For this purpose, a comprehensive pharmacokinetic (PK) simulation study was performed.
Methods:
A virtual population of 1000 clinically representative CD patients was generated based on the distribution of CD patient characteristics from an in-house clinical database (n = 116). Seven dosing regimens were investigated: fixed dose and per BW, lean BW (LBW), body surface area, height, body mass index and fat-free mass. The individual body size-adjusted doses were calculated from patient generated body size descriptor values. Then, using published PK models for IFX and CZP in CD patients [2,3], for each patient, 1000 concentration–time profiles were simulated to consider the typical profile of a specific patient as well as the range of possible individual profiles due to unexplained PK variability across patients. For each dosing strategy, the variability in maximum and minimum mAb concentrations (Cmax and Cmin, respectively), area under the concentration-time curve (AUC) and the per cent of patients reaching target concentration were assessed during maintenance therapy.
Results:
For IFX and CZP, Cmin showed the highest variability between patients (CV ≈110% and CV ≈80%, respectively) with a similar extent across all dosing strategies. For IFX, the per cent of patients reaching the target (Cmin = 5 µg/ml) was similar across all dosing strategies (~15%). For CZP, the per cent of patients reaching the target average concentration of 17 µg/ml ranged substantially (52–71%), being the highest for LBW-adjusted dosing.
Conclusion:
Using a PK simulation approach, the different dosing regimens of IFX and CZP all revealed the highest variability for Cmin, the PK parameter most commonly used to guide treatment decisions, independent of the dosing regimen. Our results demonstrate similar target attainment with fixed dosing of IFX compared with the currently recommended BW-based dosing. For CZP, the current fixed dosing strategy leads to a percentage of patients reaching the target comparable to the best performing body size-adjusted dosing (66% vs. 71%, respectively).
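The simulation logic described in the Methods can be condensed to: draw a virtual population of trough concentrations, then compute the variability (CV) and the fraction reaching a target. The lognormal model, the median of 3 ug/mL, and the seed below are assumptions for illustration only, not the published PK models used in the study.

```python
import numpy as np

def target_attainment(cmin, target):
    # fraction of virtual patients whose trough concentration reaches the target
    return (np.asarray(cmin) >= target).mean()

# hypothetical virtual population: lognormal trough concentrations with ~110% CV,
# matching the Cmin variability magnitude reported above for IFX; the median is made up
rng = np.random.default_rng(42)
sigma = np.sqrt(np.log(1.0 + 1.1 ** 2))   # lognormal sigma matching CV = 1.1
cmin = rng.lognormal(mean=np.log(3.0), sigma=sigma, size=1000)
```

With such heavy right-skew, a large fraction of virtual patients falls below a 5 ug/mL target even when the median is not far from it, which is why trough variability dominates target attainment in the results above.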
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise due to nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model as well as deep neural network (DNN) approaches, all are superior to CFS with predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and are also better than the distance-slip probabilistic model.
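ROC analysis in this setting reduces to ranking grid cells by a stress metric and sweeping a threshold over the binary aftershock labels. A minimal sketch follows (function name and inputs are illustrative, not the paper's pipeline):

```python
import numpy as np

def roc_auc(score, label):
    """ROC curve and AUC for a binary aftershock map.

    score: stress metric per grid cell (higher = more hazardous),
    label: 1 where aftershocks occurred, 0 elsewhere.
    """
    order = np.argsort(-np.asarray(score))          # descending by stress metric
    lab = np.asarray(label)[order]
    tpr = np.concatenate([[0.0], np.cumsum(lab) / lab.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - lab) / (1 - lab).sum()])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)  # trapezoidal area
    return fpr, tpr, auc
```

An AUC of 1 means the metric ranks every aftershock cell above every quiet cell; 0.5 is no better than chance, which is the baseline against which the stress metrics above are compared.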
Let (M-i, g(i))(i is an element of N) be a sequence of spin manifolds with uniformly bounded curvature and diameter that converges to a lower-dimensional Riemannian manifold (B, h) in the Gromov-Hausdorff topology. Then the spectrum of the Dirac operator converges to the spectrum of a certain first-order elliptic differential operator D-B on B. We give an explicit description of D-B and characterize the special case where D-B equals the Dirac operator on B.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.
A zig-zag (or fence) order is a special partial order on a (finite) set. In this paper, we consider the semigroup TFn of all order-preserving transformations on an n-element zig-zag-ordered set. We determine the rank of TFn and provide a minimal generating set for TFn. Moreover, a formula for the number of idempotents in TFn is given.
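The objects in question are easy to explore by brute force for small n: the sketch below enumerates all order-preserving transformations of an n-element zig-zag-ordered set. It simply counts them; it is not the paper's rank computation, generating set, or idempotent formula.

```python
from itertools import product

def leq(i, j):
    # zig-zag (fence) order 0 < 1 > 2 < 3 > ...: only adjacent elements are
    # comparable, with the even element below the odd one
    return i == j or (abs(i - j) == 1 and i % 2 == 0)

def count_order_preserving(n):
    """Brute-force count of |TFn|, the order-preserving self-maps of the fence."""
    pairs = [(i, j) for i in range(n) for j in range(n) if leq(i, j)]
    count = 0
    for f in product(range(n), repeat=n):
        if all(leq(f[i], f[j]) for i, j in pairs):
            count += 1
    return count
```

For n = 2 (the order 0 < 1) the three maps (0,0), (0,1), (1,1) are order-preserving, and a short case analysis on the image of the middle element gives 11 maps for n = 3.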
The majority of earthquakes occur unexpectedly and can trigger subsequent sequences of events that can culminate in more powerful earthquakes. This self-exciting nature of seismicity generates complex clustering of earthquakes in space and time. Therefore, the problem of constraining the magnitude of the largest expected earthquake during a future time interval is of critical importance in mitigating earthquake hazard. We address this problem by developing a methodology to compute the probabilities for such extreme earthquakes to be above certain magnitudes. We combine the Bayesian methods with the extreme value theory and assume that the occurrence of earthquakes can be described by the Epidemic Type Aftershock Sequence process. We analyze in detail the application of this methodology to the 2016 Kumamoto, Japan, earthquake sequence. We are able to estimate retrospectively the probabilities of having large subsequent earthquakes during several stages of the evolution of this sequence.
Our first result concerns a characterization by means of a functional equation of Poisson point processes conditioned by the value of their first moment. It leads to a generalized version of Mecke’s formula. En passant, it also allows us to gain quantitative results about stochastic domination for Poisson point processes under linear constraints. Since bridges of a pure jump Lévy process in Rd with a height a can be interpreted as a Poisson point process on space–time conditioned by pinning its first moment to a, our approach allows us to characterize bridges of Lévy processes by means of a functional equation. The latter result has two direct applications: First, we obtain a constructive and simple way to sample Lévy bridge dynamics; second, it allows us to estimate the number of jumps for such bridges. We finally show that our method remains valid for linearly perturbed Lévy processes like periodic Ornstein–Uhlenbeck processes driven by Lévy noise.
This paper concerns the problem of predicting the maximum expected earthquake magnitude μ in a future time interval T_f given a catalog covering a time period T in the past. Different studies show the divergence of the confidence interval of the maximum possible earthquake magnitude m_max for high levels of confidence (Salamat et al. 2017). Therefore, m_max should be replaced by μ (Holschneider et al. 2011). In a previous study (Salamat et al. 2018), μ was estimated for an instrumental earthquake catalog of Iran from 1900 onwards with a constant level of completeness (m0 = 5.5). In the current study, the Bayesian methodology developed by Zöller et al. (2014, 2015) is applied for the purpose of predicting μ based on a catalog consisting of both historical and instrumental parts. The catalog is first subdivided into six subcatalogs corresponding to six seismotectonic zones, and each of those zone catalogs is subsequently subdivided according to changes in completeness level and magnitude uncertainty. For this, broad and narrow error distributions are considered for historical and instrumental earthquakes, respectively. We assume that earthquakes follow a Poisson process in time and the Gutenberg-Richter law in the magnitude domain, with a priori unknown a- and b-values which are first estimated by Bayes' theorem and subsequently used to estimate μ. Imposing different values of m_max for the different seismotectonic zones, namely Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh and Makran, the results show considerable probabilities for the occurrence of earthquakes with Mw ≥ 7.5 in short T_f, whereas for long T_f, μ is almost equal to m_max.
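Two ingredients named above admit compact sketches: the maximum-likelihood (Aki) estimate of the Gutenberg-Richter b-value, and the exceedance probability under a Poisson-Gutenberg-Richter model. The full Bayesian treatment of the paper is not reproduced here, and all numbers are illustrative.

```python
import numpy as np

def aki_b_value(mags, m0):
    """Maximum-likelihood b-value (Aki 1965) for magnitudes above completeness m0."""
    m = np.asarray(mags)
    m = m[m >= m0]
    return np.log10(np.e) / (m.mean() - m0)

def prob_exceed(rate, b, m0, m, T):
    """P(at least one event with magnitude >= m within time window T),
    assuming Poisson occurrences at 'rate' above m0 and Gutenberg-Richter
    magnitudes: lambda(m) = rate * 10^(-b (m - m0))."""
    lam = rate * 10 ** (-b * (m - m0)) * T
    return 1.0 - np.exp(-lam)
```

Because Gutenberg-Richter magnitudes above m0 are exponentially distributed, the Aki estimator recovers b from the mean magnitude excess alone, which makes it a convenient check on the Bayesian posterior.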
We show that the Dirac operator on a compact globally hyperbolic Lorentzian spacetime with spacelike Cauchy boundary is a Fredholm operator if appropriate boundary conditions are imposed. We prove that the index of this operator is given by the same expression as in the index formula of Atiyah-Patodi-Singer for Riemannian manifolds with boundary. The index is also shown to equal that of a certain operator constructed from the evolution operator and a spectral projection on the boundary. In case the metric is of product type near the boundary a Feynman parametrix is constructed.
A term t is linear if no variable occurs more than once in t. An identity s ≈ t is said to be linear if s and t are linear terms. Identities are particular formulas. As for terms, superposition operations can be defined for formulas, too. We define arbitrary linear formulas and seek a condition for the set of all linear formulas to be closed under superposition. This is used to define the partial superposition operations on the set of linear formulas and a partial many-sorted algebra Formclonelin(τ, τ′). This algebra has properties similar to those of the partial many-sorted clone of all linear terms. We extend the concept of a hypersubstitution of type τ to the linear hypersubstitutions of type (τ, τ′) for algebraic systems. The extensions of linear hypersubstitutions of type (τ, τ′) send linear formulas to linear formulas, presenting weak endomorphisms of Formclonelin(τ, τ′).
A term, also called a tree, is said to be linear if each variable occurs in it only once. Linear terms and sets of linear terms, the so-called linear tree languages, play a role in automata theory and in the theory of formal languages in connection with recognizability. We define a partial superposition operation on sets of linear trees of a given type and study the properties of some many-sorted partial clones that have sets of linear trees as elements and partial superposition operations as fundamental operations. The endomorphisms of these algebras correspond to nondeterministic linear hypersubstitutions.
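The defining property of linearity is easy to check mechanically. A minimal sketch, assuming a nested-tuple encoding of terms (an illustrative convention, not the paper's formalism):

```python
from collections import Counter

# A term is modeled as either a variable name (str) or a tuple
# (operation_symbol, subterm_1, ..., subterm_k).
def variables(term):
    """Yield every variable occurrence in the term, with repetitions."""
    if isinstance(term, str):
        yield term
    else:
        for sub in term[1:]:
            yield from variables(sub)

def is_linear(term):
    """A term is linear iff no variable occurs more than once."""
    counts = Counter(variables(term))
    return all(c == 1 for c in counts.values())

print(is_linear(("f", "x", ("g", "y"))))   # x and y each occur once
print(is_linear(("f", "x", ("g", "x"))))   # x occurs twice
```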
One method of embedding groups into skew fields was introduced by A. I. Mal'tsev and B. H. Neumann (cf. [18, 19]). If G is an ordered group and F is a skew field, the set F((G)) of formal power series over F in G with well-ordered support forms a skew field into which the group ring F[G] can be embedded. Unfortunately, it is not sufficient that G is left-ordered, since in this case F((G)) is only an F-vector space and there is no natural way to define a multiplication on it. One way to extend the original idea to left-ordered groups is to examine the endomorphism ring of F((G)), as explored by N. I. Dubrovin (cf. [5, 6]). Any crossed product ring F[G; η, σ] can be embedded into the endomorphism ring of F((G)) in such a way that each non-zero element of F[G; η, σ] defines an automorphism of F((G)) (cf. [5, 10]). Thus the rational closure of F[G; η, σ] in the endomorphism ring of F((G)), which we call the Dubrovin ring of F[G; η, σ], is a potential candidate for a skew field of fractions of F[G; η, σ]. Dubrovin's methods made it possible to show that specific classes of groups can be embedded into a skew field; for example, he devised special criteria applicable to the universal covering group of SL(2, R). These methods have also been explored by J. Gräter and R. P. Sperner (cf. [10]) as well as by N. H. Halimi and T. Ito (cf. [11]). Furthermore, it is of interest to know whether skew fields of fractions are unique. Left and right Ore domains, for example, have unique skew fields of fractions (cf. [2]). This is not the case in general: the free group on 2 generators can be embedded into non-isomorphic skew fields of fractions (cf. [12]). It seems likely that Ore domains are the most general case for which unique skew fields of fractions exist. One approach to regain uniqueness is to restrict the search to skew fields of fractions with additional properties. I. Hughes has defined skew fields of fractions of crossed product rings F[G; η, σ] with locally indicable G which fulfill a special condition; these are called Hughes-free skew fields of fractions, and I. Hughes has proven that they are unique if they exist [13, 14]. This thesis connects the ideas of N. I. Dubrovin and I. Hughes. The first chapter contains the basic terminology and concepts used in the thesis. We present methods provided by N. I. Dubrovin, such as the complexity of elements in rational closures, and special properties of endomorphisms of the vector space of formal power series F((G)). To combine the ideas of N. I. Dubrovin and I. Hughes we introduce Conradian left-ordered groups of maximal rank and examine their connection to locally indicable groups. Furthermore, we fix notation for crossed product rings, skew fields of fractions, and Dubrovin rings, and prove some technical statements used in later parts. The second chapter focuses on Hughes-free skew fields of fractions and their connection to Dubrovin rings. For that purpose we introduce series representations to interpret elements of Hughes-free skew fields of fractions as skew formal Laurent series. This allows us to prove that for Conradian left-ordered groups G of maximal rank, the statement "F[G; η, σ] has a Hughes-free skew field of fractions" implies "the Dubrovin ring of F[G; η, σ] is a skew field". We also prove the converse and apply the results to give a new proof of Theorem 1 in [13]. Furthermore, we show how to extend injective ring homomorphisms of certain crossed product rings to their Hughes-free skew fields of fractions. Finally, we answer the open question whether Hughes-free skew fields are strongly Hughes-free (cf. [17, page 53]).
We study the mathematical structure underlying the concept of locality which lies at the heart of classical and quantum field theory, and develop a machinery used to preserve locality during the renormalisation procedure. Viewing renormalisation in the framework of Connes and Kreimer as the algebraic Birkhoff factorisation of characters on a Hopf algebra with values in a Rota-Baxter algebra, we build locality variants of these algebraic structures, leading to a locality variant of the algebraic Birkhoff factorisation. This provides an algebraic formulation of the conservation of locality while renormalising. As an application in the context of the Euler-Maclaurin formula on lattice cones, we renormalise the exponential generating function which sums over the lattice points in a lattice cone. As a consequence, for a suitable multivariate regularisation, renormalisation from the algebraic Birkhoff factorisation amounts to composition by a projection onto holomorphic multivariate germs.
We prove the Fréchet differentiability with respect to the drift of Perron–Frobenius and Koopman operators associated to time-inhomogeneous ordinary stochastic differential equations. This result relies on a similar differentiability result for pathwise expectations of path functionals of the solution of the stochastic differential equation, which we establish using Girsanov's formula. We demonstrate the significance of our result in the context of dynamical systems and operator theory, by proving continuously differentiable drift dependence of the simple eigen- and singular values and the corresponding eigen- and singular functions of the stochastic Perron–Frobenius and Koopman operators.
High-precision observations of the present-day geomagnetic field by ground-based observatories and satellites provide unprecedented conditions for unveiling the dynamics of the Earth's core. Combining geomagnetic observations with dynamo simulations in a data assimilation (DA) framework allows the reconstruction of past and present states of the internal core dynamics. The essential information that couples the internal state to the observations is provided by the statistical correlations from a numerical dynamo model in the form of a model covariance matrix. Here we test a sequential DA framework, working through a succession of forecast and analysis steps, that extracts the correlations from an ensemble of dynamo models. The primary correlations couple variables of the same azimuthal wave number, reflecting the predominant axial symmetry of the magnetic field. Synthetic tests show that the scheme becomes unstable when confronted with high-precision geomagnetic observations. Our study has identified spurious secondary correlations as the origin of the problem. Keeping only the primary correlations by localizing the covariance matrix with respect to the azimuthal wave number suffices to stabilize the assimilation. While the first analysis step is fundamental in constraining the large-scale interior state, further assimilation steps refine the smaller and more dynamical scales. This refinement turns out to be critical for long-term geomagnetic predictions. Increasing the assimilation steps from one to 18 roughly doubles the prediction horizon for the dipole, from about three to six centuries, and from 30 to about 60 years for smaller observable scales. This improvement is also reflected in the predictability of surface intensity features such as the South Atlantic Anomaly. Intensity prediction errors are roughly halved when assimilating long observation sequences.
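The wave-number localization of the ensemble covariance can be illustrated on a toy state vector. In this sketch the wave-number labels, ensemble size, and random data are all invented; the actual scheme described in the companion abstract also restricts couplings by equatorial symmetry, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state: spherical-harmonic coefficients labeled by their azimuthal
# wave number m (the labels below are made up for illustration).
wave_numbers = np.array([0, 0, 1, 1, 2, 2, 3, 3])
n_state, n_ens = len(wave_numbers), 20

ensemble = rng.standard_normal((n_state, n_ens))
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
cov = anomalies @ anomalies.T / (n_ens - 1)

# Localization: zero every covariance entry that couples coefficients of
# different azimuthal wave number, suppressing spurious sampling noise
# from the small ensemble.
mask = (wave_numbers[:, None] == wave_numbers[None, :]).astype(float)
cov_localized = cov * mask

print(np.count_nonzero(cov_localized), "of", cov.size, "entries retained")
```

Only block-diagonal entries (same wave number) survive, which is what permits the ensemble size to stay small without destabilizing the analysis.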
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited by their inefficiency in such systems in standard settings. However, huge progress has been made, and this limitation is disappearing fast owing to recent developments in proposal densities, the use of ideas from (optimal) transportation, localization, and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters, ensemble Kalman filters, and variational methods have been developed. We present a state-of-the-art discussion of present efforts to develop particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications. We include many new ideas, derivations, and unifications, highlight hidden connections, and provide pseudo-code, generating a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
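One of the resampling strategies mentioned above can be made concrete. The sketch below implements systematic resampling, a standard low-variance textbook scheme; the weights are made up for illustration.

```python
import numpy as np

def systematic_resampling(weights, rng):
    """Systematic resampling: one uniform draw, stratified positions.

    Returns the indices of the particles to keep.  Compared to multinomial
    resampling this has lower variance, since the n resampling positions
    are evenly spaced and share a single random offset.
    """
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                 # guard against round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(2)
w = np.array([0.5, 0.3, 0.1, 0.1])
idx = systematic_resampling(w, rng)
print(idx)   # heavily weighted particles appear multiple times
```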
On a smooth complete Riemannian spin manifold with smooth compact boundary, we demonstrate that the Atiyah-Singer Dirac operator depends Riesz continuously on perturbations of its local boundary conditions. The Lipschitz bound for this map depends on the Lipschitz smoothness and ellipticity of the boundary conditions, on bounds on the Ricci curvature and its first derivatives, and on a lower bound on the injectivity radius away from a compact neighbourhood of the boundary. More generally, we prove perturbation estimates for functional calculi of elliptic operators on manifolds with local boundary conditions.
Fractures serve as highly conductive preferential flow paths for fluids in rocks and are difficult to reconstruct exactly in numerical models. In low-conductive rocks especially, fractures are often the only pathways for advection of solutes and heat. The presented study compares the results of hydraulic and tracer tomography applied to invert a theoretical discrete fracture network (DFN) based on data from synthetic cross-well testing. For hydraulic tomography, pressure pulses are induced in various injection intervals and the pressure responses in the monitoring intervals of a nearby observation well are recorded. For tracer tomography, a conservative tracer is injected at different well levels and the depth-dependent breakthrough of the tracer is monitored. A recently introduced transdimensional Bayesian inversion procedure is applied for both tomographic methods, which adjusts the fracture positions, orientations, and numbers based on given geometrical fracture statistics. The Metropolis-Hastings-Green algorithm used is refined by the simultaneous estimation of the measurement error's variance, that is, the measurement noise. In the presented application, inverting the two-dimensional cross-section between the source and the receiver well, hydraulic tomography proves more suitable for reconstructing the original DFN. This assessment is based on a probabilistic representation of the inverted results by means of fracture probabilities.
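The simultaneous noise-variance estimation can be caricatured in a much simpler setting. The following sketch alternates Metropolis updates of a single model parameter with a conjugate Gibbs update of the noise variance; the linear forward model, priors, and tuning constants are illustrative assumptions, not the paper's transdimensional DFN inversion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model standing in for the fracture-network simulator:
# data = theta * x + noise, with unknown theta and noise variance.
x = np.linspace(0.0, 1.0, 50)
data = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

def loglik(t, s2):
    r = data - t * x
    return -0.5 * np.sum(r**2) / s2 - 0.5 * x.size * np.log(s2)

theta, sigma2 = 0.0, 1.0
samples = []
for _ in range(3000):
    # Metropolis step for theta under a flat prior.
    prop = theta + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < loglik(prop, sigma2) - loglik(theta, sigma2):
        theta = prop
    # Gibbs step: conjugate inverse-gamma update of the noise variance.
    resid = data - theta * x
    sigma2 = 1.0 / rng.gamma(x.size / 2.0, 2.0 / np.sum(resid**2))
    samples.append(theta)

print(f"posterior mean theta: {np.mean(samples[500:]):.2f}")
```

The point of the joint update is that the noise level no longer has to be fixed a priori: it is learned alongside the model parameters.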
We study the spectral location of strongly pattern-equivariant Hamiltonians arising from configurations on a colored lattice. Roughly speaking, two configurations are "close to each other" if, up to a translation, they "almost coincide" on a large fixed ball; the larger this ball, the more similar they are, and this induces a metric on the space of the corresponding dynamical systems. Our main result states that the map which sends a given configuration to the spectrum of its associated Hamiltonian is Hölder (even Lipschitz) continuous in the usual Hausdorff metric. Specifically, the spectral distance of two Hamiltonians is estimated by the distance of the corresponding dynamical systems.
We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, error upper bounds for the broad family of spectral regularization methods for reproducing kernel decision rules trained on a sample coming from a τ-mixing process.
We construct eta- and rho-invariants for Dirac operators, on the universal covering of a closed manifold, that are invariant under the projective action associated to a 2-cocycle of the fundamental group. We prove an Atiyah-Patodi-Singer index theorem in this setting, as well as its higher generalisation. Applications concern the classification of positive scalar curvature metrics on closed spin manifolds. We also investigate the properties of these twisted invariants for the signature operator and the relation to the higher invariants.
Tasking machine learning to predict segments of a time series requires estimating the parameters of an ML model with input/output pairs from the time series. We borrow two techniques used in statistical data assimilation to accomplish this task: time-delay embedding to prepare the input data, and precision annealing as a training method. The precision annealing approach identifies the global minimum of the action (-log[P]). In this way we are able to identify the number of training pairs required to produce good generalizations (predictions) for the time series. We proceed from a scalar time series s(t_n), t_n = t_0 + nΔt, and, using methods of nonlinear time series analysis, show how to produce a time-delay embedding space of dimension D_E > 1 in which the time series has no false neighbors, unlike the observed scalar series s(t_n). In that D_E-dimensional space we explore the use of feedforward multilayer perceptrons as network models operating on D_E-dimensional inputs and producing D_E-dimensional outputs.
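Time-delay embedding, the first of the two borrowed techniques, is straightforward to sketch. The signal, embedding dimension, and delay below are illustrative; in practice D_E and the delay are chosen with false-nearest-neighbor and average-mutual-information tests.

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Map a scalar series s(t_n) to the vectors
    [s(t_n), s(t_{n+tau}), ..., s(t_{n+(dim-1)*tau})]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical scalar observable (a two-frequency signal, for illustration).
t = np.linspace(0.0, 20.0, 400)
s = np.sin(t) + 0.5 * np.sin(3.1 * t)

X = delay_embed(s, dim=3, tau=5)
print(X.shape)   # (390, 3)
```

Each row of `X` is one D_E-dimensional input vector for the multilayer perceptron; the corresponding targets are rows shifted forward in time.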
We study the spectral properties of curl, a linear differential operator of first order acting on differential forms of appropriate degree on an odd-dimensional closed oriented Riemannian manifold. In three dimensions, its eigenvalues are the electromagnetic oscillation frequencies in vacuum without external sources. In general, the spectrum consists of the eigenvalue 0 with infinite multiplicity and further real discrete eigenvalues of finite multiplicity. We compute the Weyl asymptotics and study the zeta-function. We give a sharp lower eigenvalue bound for positively curved manifolds and analyze the equality case. Finally, we compute the spectrum for flat tori, round spheres, and 3-dimensional spherical space forms.
By adapting the Cheeger-Simons approach to differential cohomology, we establish a notion of differential cohomology with compact support. We show that it is functorial with respect to open embeddings and that it fits into a natural diagram of exact sequences which compare it to compactly supported singular cohomology and differential forms with compact support, in full analogy to ordinary differential cohomology. We prove an excision theorem for differential cohomology using a suitable relative version. Furthermore, we use our model to give an independent proof of Pontryagin duality for differential cohomology recovering a result of [Harvey, Lawson, Zweck - Amer. J. Math. 125 (2003), 791]: On any oriented manifold, ordinary differential cohomology is isomorphic to the smooth Pontryagin dual of compactly supported differential cohomology. For manifolds of finite-type, a similar result is obtained interchanging ordinary with compactly supported differential cohomology.
Data assimilation
(2019)
Data assimilation addresses the general problem of how to combine model-based predictions with partial and noisy observations of the process in an optimal manner. This survey focuses on sequential data assimilation techniques using probabilistic particle-based algorithms. In addition to surveying recent developments for discrete- and continuous-time data assimilation, both in terms of mathematical foundations and algorithmic implementations, we also provide a unifying framework from the perspective of coupling of measures, and Schrödinger’s boundary value problem for stochastic processes in particular.
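A bootstrap particle filter, the simplest of the probabilistic particle-based algorithms surveyed, can be sketched on a toy scalar model; the AR(1) dynamics, noise levels, and particle count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy hidden AR(1) signal observed with Gaussian noise; a bootstrap
# particle filter (sequential importance resampling) tracks the state.
a, q, r = 0.9, 0.5, 0.5
steps, n_part = 100, 500

x_true = 0.0
particles = rng.normal(0.0, 1.0, n_part)
errors = []
for _ in range(steps):
    # Truth and noisy observation.
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
    y = x_true + rng.normal(0.0, np.sqrt(r))

    # Forecast step: propagate every particle through the model.
    particles = a * particles + rng.normal(0.0, np.sqrt(q), n_part)

    # Analysis step: weight by the observation likelihood and resample.
    w = np.exp(-0.5 * (y - particles) ** 2 / r)
    w /= w.sum()
    particles = rng.choice(particles, size=n_part, p=w)

    errors.append(abs(particles.mean() - x_true))

print(f"mean filtering error: {np.mean(errors):.2f}")
```

The filtered estimate tracks the hidden state noticeably better than the raw observations, which is the basic promise that the more sophisticated couplings in the survey try to preserve in high dimensions.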
We continue our study of invariant forms of the classical equations of mathematical physics, such as the Maxwell equations or the Lamé system, on manifolds with boundary. To this end we interpret them in terms of the de Rham complex at a certain step. Using the structure of the complex, we gain the insight needed to predict a degeneracy deeply encoded in the equations. In the present paper we develop an invariant approach to the classical Navier-Stokes equations.
In this paper we develop a general framework for constructing and analyzing coupled Markov chain Monte Carlo samplers, allowing for both (possibly degenerate) diffusion and piecewise deterministic Markov processes. For many performance criteria of interest, including the asymptotic variance, the task of finding efficient couplings can be phrased in terms of problems related to optimal transport theory. We investigate general structural properties, proving a singularity theorem that has both geometric and probabilistic interpretations. Moreover, we show that those problems can often be solved approximately, and we support our findings with numerical experiments. For the particular objective of estimating the variance of a Bayesian posterior, our analysis suggests using novel techniques in the spirit of antithetic variates. Addressing the convergence to equilibrium of coupled processes, we furthermore derive a modified Poincaré inequality.
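The antithetic idea can be caricatured with two coupled random-walk Metropolis chains that share acceptance uniforms and use negated proposal increments; the Gaussian target is a toy choice, not the paper's construction. For this symmetric target the coupling is perfectly antithetic, so the averaged estimator of the posterior mean has zero error.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    return -0.5 * x**2          # standard Gaussian target (toy posterior)

# Two random-walk Metropolis chains coupled antithetically: chain 2 uses
# the negated proposal increments and the same acceptance uniforms.
n = 20000
x1 = x2 = 0.0
s1 = s2 = 0.0
for _ in range(n):
    z = rng.normal(0.0, 1.0)
    u = np.log(rng.uniform())
    p1, p2 = x1 + z, x2 - z
    if u < log_target(p1) - log_target(x1):
        x1 = p1
    if u < log_target(p2) - log_target(x2):
        x2 = p2
    s1 += x1
    s2 += x2

# Averaging the two negatively correlated chains reduces the variance of
# the estimate of the posterior mean (here the true mean is 0).
print(f"coupled estimate of the mean: {(s1 + s2) / (2 * n):+.3f}")
```

For asymmetric targets the cancellation is no longer exact, but the negative correlation between the chains still lowers the variance, which is the effect the paper formalizes.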
Many machine learning problems can be characterized by mutual contamination models. In these problems, one observes several random samples from different convex combinations of a set of unknown base distributions and the goal is to infer these base distributions. This paper considers the general setting where the base distributions are defined on arbitrary probability spaces. We examine three popular machine learning problems that arise in this general setting: multiclass classification with label noise, demixing of mixed membership models, and classification with partial labels. In each case, we give sufficient conditions for identifiability and present algorithms for the infinite and finite sample settings, with associated performance guarantees.
We discuss canonical representations of the de Rham cohomology on a compact manifold with boundary. They are obtained by minimising the energy integral in a Hilbert space of differential forms that, together with their exterior derivatives, belong to the domain of the adjoint operator. The corresponding Euler-Lagrange equations reduce to an elliptic boundary value problem on the manifold, usually referred to as the Neumann problem after Spencer.
Packungen aus Kreisscheiben (Packings of circular disks)
(2019)
The English seafarer Sir Walter Raleigh once wondered how to stack as many cannonballs as possible in the hold of his ship. Prompted by this, Johannes Kepler formulated a conjecture in 1611 about the optimal arrangement of the spheres. This conjecture would turn out to be one of the hardest mathematical nuts in history to crack. Even in the plane, densest packings of congruent circles are a challenge. In 1892 and 1910, Axel Thue published (criticized) proofs that the hexagonal circle packing is optimal. Only in 1940 did László Fejes Tóth finally deliver a watertight proof of this fact. A variant of the problem asks for packings of finitely many congruent disks that minimize a certain quadratic energy; this intriguing geometric problem was posed by Tóth in 1967 and is still not completely solved today. In this contribution the authors propose an original probabilistic method for constructing approximations to the solution in the plane.
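The probabilistic flavor of the approach can be caricatured by a naive accept-if-better random search for finitely many unit disks minimizing a quadratic energy; the energy, disk count, and search parameters below are invented for illustration and this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Place n unit disks (centers at least 2 apart) so that the quadratic
# energy sum ||c_i||^2 is small; a simple random-perturbation search.
n, radius = 7, 1.0

def feasible(c):
    d = np.linalg.norm(c[:, None] - c[None, :], axis=2)
    return np.all(d[np.triu_indices(n, 1)] >= 2 * radius - 1e-9)

def energy(c):
    return np.sum(c**2)

# Rejection-sample a feasible starting configuration.
centers = rng.uniform(-6, 6, (n, 2))
while not feasible(centers):
    centers = rng.uniform(-6, 6, (n, 2))

best = energy(centers)
for _ in range(20000):
    trial = centers + rng.normal(0.0, 0.1, (n, 2))
    if feasible(trial) and energy(trial) < best:
        centers, best = trial, energy(trial)

print(f"final energy: {best:.2f}")  # the hexagonal arrangement gives 24
```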