Institut für Mathematik, publications 2020 (75 entries): 60 articles, 5 postprints, 4 doctoral theses, 2 conference proceedings, 2 master's theses, 1 monograph/edited volume, 1 part of a book. Frequent keywords: random point processes, statistical mechanics, stochastic analysis, data assimilation.
The estimation of a log-concave density on the real line is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that obviates the need for further metric entropy calculations. We further present computationally more feasible approximations as well as an empirical and a hierarchical Bayes approach. All priors are illustrated numerically via simulations.
We study the Cauchy problem for a nonlinear elliptic equation with data on a piece S of the boundary surface ∂X. By the Cauchy problem is meant any boundary value problem for an unknown function u in a domain X with the property that the data on S, if combined with the differential equations in X, allow one to determine all derivatives of u on S by means of functional equations. In the case of real analytic data of the Cauchy problem, the existence of a local solution near S is guaranteed by the Cauchy-Kovalevskaya theorem. We discuss a variational setting of the Cauchy problem which always possesses a generalized solution.
We study those nonlinear partial differential equations which appear as Euler-Lagrange equations of variational problems. On defining weak boundary values of solutions to such equations we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree, which is currently of central importance, applies to Lagrangian problems.
Author summary: Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.

Understanding the decision process underlying gaze control is an important question in cognitive neuroscience with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection process between two states: Should the observer further inspect the information near the current gaze position (local attention) or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye movement behavior. We implemented a Bayesian approach for model parameter inference based on the model's likelihood function. In order to simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is twofold: first, we propose a new model for saccade generation in scene viewing; second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
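The data-augmentation and conjugate-prior Gibbs strategy described above can be illustrated on a deliberately simplified stand-in: a two-state exponential mixture for saccade amplitudes, with short saccades loosely playing the role of local attention and long saccades of global attention. All distributions, priors, and numbers below are invented for illustration and are not the paper's actual model or likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic saccade amplitudes from a two-state mixture:
# "local" attention -> short saccades, "global" attention -> long saccades
true_rates = np.array([2.0, 0.3])                  # exponential rates per state
z_true = rng.random(500) < 0.6                     # True = local state
amps = rng.exponential(1.0 / np.where(z_true, true_rates[0], true_rates[1]))

# conjugate priors: Gamma(a0, b0) on each rate, Beta(1, 1) on P(local);
# data augmentation: the latent state of every saccade is sampled explicitly
a0, b0 = 1.0, 1.0
rates = np.array([3.0, 0.5])          # asymmetric init to avoid label switching
p_local = 0.5
draws = []

for it in range(2000):
    # 1) augment: sample latent states given rates and mixing weight
    lik_local = p_local * rates[0] * np.exp(-rates[0] * amps)
    lik_global = (1.0 - p_local) * rates[1] * np.exp(-rates[1] * amps)
    z = rng.random(amps.size) < lik_local / (lik_local + lik_global)
    # 2) conjugate Gamma update for each state-specific rate
    for s, mask in enumerate((z, ~z)):
        rates[s] = rng.gamma(a0 + mask.sum(), 1.0 / (b0 + amps[mask].sum()))
    # 3) conjugate Beta update for the mixing weight
    p_local = rng.beta(1.0 + z.sum(), 1.0 + (~z).sum())
    draws.append(rates.copy())

post_mean = np.mean(draws[500:], axis=0)           # posterior mean after burn-in
```

Because every full-conditional is a standard distribution, each sweep is a handful of vectorized draws; this is the numerical efficiency the abstract alludes to.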
Author summary: The use of orally inhaled drugs for treating lung diseases is appealing since they have the potential for lung selectivity, i.e. high exposure at the site of action (the lung) without excessive side effects. However, the degree of lung selectivity depends on a large number of factors, including physicochemical properties of drug molecules, patient disease state, and inhalation devices. To predict the impact of these factors on drug exposure and thereby to understand the characteristics of an optimal drug for inhalation, we develop a predictive mathematical framework (a "pharmacokinetic model"). In contrast to previous approaches, our model allows knowledge from different sources to be combined appropriately, and it adequately predicted different sets of clinical data. Finally, we compare the impact of different factors and find that the most important ones are the size of the inhaled particles, the affinity of the drug to the lung tissue, and the rate of drug dissolution in the lung. Contrary to common belief, the solubility of a drug in the lining fluids is not found to be relevant. These findings are important for understanding how inhaled drugs should be designed to achieve the best treatment results in patients.

The fate of orally inhaled drugs is determined by pulmonary pharmacokinetic processes such as particle deposition, pulmonary drug dissolution, and mucociliary clearance. Even though each single process has been systematically investigated, a quantitative understanding of how these processes interact remains limited, and identifying optimal drug and formulation characteristics for orally inhaled drugs is therefore still challenging. To investigate this complex interplay, the pulmonary processes can be integrated into mathematical models. However, existing modeling attempts considerably simplify these processes or are not systematically evaluated against (clinical) data.
In this work, we developed a mathematical framework based on physiologically-structured population equations to integrate all relevant pulmonary processes mechanistically. A tailored numerical resolution strategy was chosen and the mechanistic model was evaluated systematically against data from different clinical studies. Without adapting the mechanistic model or estimating kinetic parameters based on individual study data, the developed model was able to predict simultaneously (i) lung retention profiles of inhaled insoluble particles, (ii) particle size-dependent pharmacokinetics of inhaled monodisperse particles, (iii) pharmacokinetic differences between inhaled fluticasone propionate and budesonide, as well as (iv) pharmacokinetic differences between healthy volunteers and asthmatic patients. Finally, to identify the most impactful optimization criteria for orally inhaled drugs, the developed mechanistic model was applied to investigate the impact of input parameters on both the pulmonary and systemic exposure. Interestingly, the solubility of the inhaled drug did not have any relevant impact on the local and systemic pharmacokinetics. Instead, the pulmonary dissolution rate, the particle size, the tissue affinity, and the systemic clearance were the most impactful potential optimization parameters. In the future, the developed prediction framework should be considered a powerful tool for identifying optimal drug and formulation characteristics.
We consider a perturbation of the de Rham complex on a compact manifold with boundary. This perturbation goes beyond the framework of complexes, and so cohomology does not apply to it. On the other hand, its curvature is "small", hence there is a natural way to introduce an Euler characteristic and develop a Lefschetz theory for the perturbation. This work is intended as an attempt to develop a cohomology theory for arbitrary sequences of linear mappings.
The study of the Cauchy problem for solutions of the heat equation in a cylindrical domain with data on the lateral surface by the Fourier method raises the problem of calculating the inverse Laplace transform of the entire function cos √z. This problem has no solution in the standard theory of the Laplace transform. We give an explicit formula for the inverse Laplace transform of cos √z using the theory of analytic functionals. This solution is well suited to efficiently developing the regularization of solutions to Cauchy problems for parabolic equations with data on noncharacteristic surfaces.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In the case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE-constrained Bayesian inverse problem.
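A rough sketch of the idea (not the paper's reference implementation) on a 2-D Gaussian target: the empirical covariance preconditions the drift, and the noise is generated directly from the particle deviations, so no inverse or square root of the covariance is ever formed. The finite-ensemble correction term of the form (d+1)/J·(x_i − mean) follows ALDI-type dynamics; the target, step size, and ensemble size here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# target: anisotropic Gaussian, grad log pi(x) = -Sigma^{-1} (x - mu)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

J, d, dt = 50, 2, 0.05                 # ensemble size, dimension, step size
X = rng.normal(size=(J, d))
history = []

for step in range(4000):
    m = X.mean(axis=0)
    dev = (X - m) / np.sqrt(J)         # deviations; empirical covariance C = dev.T @ dev
    C = dev.T @ dev
    grad = -(X - mu) @ Sigma_inv       # rows: grad log pi at each particle
    drift = grad @ C + (d + 1) / J * (X - m)   # preconditioned drift + finite-size term
    # noise with covariance 2*dt*C built from the deviations, no sqrt(C) needed
    noise = np.sqrt(2.0 * dt) * rng.normal(size=(J, J)) @ dev
    X = X + dt * drift + noise
    if step >= 2000:                   # collect after burn-in
        history.append(X.copy())

samples = np.concatenate(history)
est_mean = samples.mean(axis=0)
est_cov = np.cov(samples.T)
```

Note that only matrix products with the deviation matrix appear, which is what makes the scheme attractive when the state dimension is large.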
Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine this data with physical models to produce accurate predictions. For spatial-temporal models, the ensemble Kalman filter with proper localisation techniques is considered to be a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show the long time path-wise error only has logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests.
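For readers unfamiliar with localisation, a single analysis step of a stochastic ensemble Kalman filter with a distance-based covariance taper conveys the flavour of the approach. This is a generic illustrative sketch, not the Kalman-Bucy variant analysed in the article; the taper shape, dimensions, and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

d, N = 40, 20                                    # state dimension, ensemble size
x_true = np.sin(np.linspace(0.0, 2.0 * np.pi, d))
R = 0.1                                          # obs noise variance, H = identity
y = x_true + np.sqrt(R) * rng.normal(size=d)

# biased prior ensemble
ens = x_true + 1.0 + rng.normal(size=(N, d))

# localisation: taper long-range sample-covariance entries,
# reflecting the short-range interactions of the model
dist = np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
dist = np.minimum(dist, d - dist)                # periodic distance
taper = np.exp(-((dist / 5.0) ** 2))

A = ens - ens.mean(axis=0)
C = taper * (A.T @ A) / (N - 1)                  # localised covariance estimate

# stochastic EnKF analysis with perturbed observations
K = C @ np.linalg.inv(C + R * np.eye(d))
y_pert = y + np.sqrt(R) * rng.normal(size=(N, d))
analysis = ens + (y_pert - ens) @ K.T

rmse_prior = np.sqrt(np.mean((ens.mean(axis=0) - x_true) ** 2))
rmse_post = np.sqrt(np.mean((analysis.mean(axis=0) - x_true) ** 2))
```

The taper suppresses spurious long-range correlations of the small ensemble, which is precisely what makes the error bounds in the article dimension-independent in spirit.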
In the limit ħ → 0, we analyze a class of Schrödinger operators H_ħ = ħ²L + ħW + V·id_ℰ acting on sections of a vector bundle ℰ over a Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field, and the potential energy V has a non-degenerate minimum at some point p ∈ M. We construct quasimodes of WKB-type near p for eigenfunctions associated with the low-lying eigenvalues of H_ħ. These are obtained from eigenfunctions of the associated harmonic oscillator H_{p,ħ} at p, acting on smooth functions on the tangent space.
We study the asymptotics of solutions to the Dirichlet problem in a domain X ⊂ ℝ³ whose boundary contains a singular point O. In a small neighborhood of this point, the domain has the form {z > √(x² + y⁴)}, i.e., the origin is a nonsymmetric conical point at the boundary. So far, the behavior of solutions to elliptic boundary-value problems has not been studied sufficiently in the case of nonsymmetric singular points. This problem was posed by V.A. Kondrat'ev in 2000. We establish a complete asymptotic expansion of solutions near the singular point.
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (2020; Seismol Res Lett, 89, 1238) conclude that increased animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns, and consequently, the observed anomalies in animal activity do not have any forecasting power.
Understanding the macroscopic behavior of dynamical systems is an important tool to unravel transport mechanisms in complex flows. A decomposition of the state space into coherent sets is a popular way to reveal this essential macroscopic evolution. To compute coherent sets from an aperiodic time-dependent dynamical system we consider the relevant transfer operators and their infinitesimal generators on an augmented space-time manifold. This space-time generator approach avoids trajectory integration and creates a convenient linearization of the aperiodic evolution. This linearization can be further exploited to create a simple and effective spectral optimization methodology for diminishing or enhancing coherence. We obtain explicit solutions for these optimization problems using Lagrange multipliers and illustrate this technique by increasing and decreasing mixing of spatial regions through small velocity field perturbations.
In this paper, we present the convergence rate analysis of the modified Landweber method under logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. The reconstructions of the shape of an unknown domain for an inverse potential problem by using the modified Landweber method are exhibited.
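For orientation, here is the classical Landweber iteration with discrepancy-principle stopping on a linear smoothing problem. It is a simplified stand-in for the modified nonlinear method analysed in the paper, and the operator, noise level, and constant τ below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# ill-posed toy problem: y = A x with a smoothing (Gaussian-kernel) operator
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2.0 * np.pi * t)
delta = 1e-3                                     # component-wise noise level
y = A @ x_true + delta * rng.normal(size=n)

# Landweber: x_{k+1} = x_k + w A^T (y - A x_k); stop by the discrepancy
# principle once ||A x_k - y|| <= tau * (noise norm)
w = 1.0 / np.linalg.norm(A, 2) ** 2
tau = 2.0
noise_norm = delta * np.sqrt(n)
x = np.zeros(n)
for k in range(100_000):
    r = y - A @ x
    if np.linalg.norm(r) <= tau * noise_norm:
        break                                    # regularization by early stopping
    x = x + w * A.T @ r

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Stopping by the discrepancy principle plays the role of choosing the regularization parameter: iterating further would begin to fit the noise.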
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the number of model parameters at an optimal level. The rjMCMC algorithm produces an ensemble as a result, i.e. a set of model realizations, which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced that convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. A scalar, a vector, and a matrix representation are presented for models inferred from tomographic subsurface investigations, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models can retain statistical ensemble properties similar to those of geologic projections created by rasterization.
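The scalar-conversion idea can be sketched as follows: map every variable-dimension realization to a fixed scalar summary, then apply a classic diagnostic such as the Gelman-Rubin statistic. The particular summary (a spatial mean) and the synthetic chains below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def r_hat(chains):
    """Gelman-Rubin potential scale reduction factor for an (m, n) array."""
    n = chains.shape[1]
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    return np.sqrt(((n - 1) / n * W + B / n) / W)

def scalar_summary(model):
    # fixed-dimension conversion of a variable-dimension model realization
    return float(np.mean(model))

def synthetic_chain(level, n_iter):
    # each realization has a different number of cells, as in rjMCMC output
    return [scalar_summary(level + rng.normal(size=rng.integers(3, 12)))
            for _ in range(n_iter)]

# two chains targeting the same posterior: R close to 1 (converged)
R_good = r_hat(np.array([synthetic_chain(5.0, 2000), synthetic_chain(5.0, 2000)]))
# two chains stuck at different levels: R clearly above 1 (not converged)
R_bad = r_hat(np.array([synthetic_chain(5.0, 2000), synthetic_chain(6.0, 2000)]))
```

The point is that the diagnostic never sees the changing dimensionality, only the fixed-size summaries.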
For the time-stationary global geomagnetic field, a new modelling concept is presented. A Bayesian non-parametric approach provides realistic location-dependent uncertainty estimates. Modelling-related variabilities are dealt with systematically, making few subjective a priori assumptions. Rather than parametrizing the model by Gauss coefficients, a functional-analytic approach is applied. The geomagnetic potential is assumed to be a Gaussian process in order to describe a distribution over functions. A priori correlations are given by an explicit kernel function with a non-informative dipole contribution. A refined modelling strategy is proposed that accommodates non-linearities of archeomagnetic observables: first, a rough field estimate is obtained considering only sites that provide full field vector records; subsequently, this estimate supports the linearization that incorporates the remaining incomplete records. Results for the archeomagnetic field over the past 1000 yr are in general agreement with previous models, while improved model uncertainty estimates are provided.
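The functional-analytic core, Gaussian process regression with an explicit kernel yielding location-dependent uncertainty, can be sketched generically. The squared-exponential kernel and all numbers below are placeholders, not the paper's geomagnetic kernel with dipole contribution:

```python
import numpy as np

rng = np.random.default_rng(5)

def kern(a, b, amp=1.0, ell=0.5):
    # placeholder squared-exponential prior covariance
    return amp * np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / ell**2)

# noisy point observations of a latent function (e.g. one field component)
x_obs = np.array([-1.5, -0.5, 0.0, 1.0])
sigma2 = 0.01
y_obs = np.sin(x_obs) + np.sqrt(sigma2) * rng.normal(size=x_obs.size)

x_star = np.linspace(-3.0, 3.0, 121)
K = kern(x_obs, x_obs) + sigma2 * np.eye(x_obs.size)
Ks = kern(x_star, x_obs)

post_mean = Ks @ np.linalg.solve(K, y_obs)
post_cov = kern(x_star, x_star) - Ks @ np.linalg.solve(K, Ks.T)
post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))
# uncertainty is small near the data and reverts to the prior far away
```

The posterior standard deviation is exactly the "realistic location-dependent uncertainty estimate": tight where records exist, wide where they do not.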
In 1960, Yamabe claimed to have proven the following statement: On every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric, conformally equivalent to g, with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found a gap in Yamabe's proof, and as a consequence many mathematicians worked on this problem, since then known as the Yamabe problem. In the 1980s, the works of Trudinger, Aubin, and Schoen showed that the statement is indeed true. This has many advantages; for example, when analysing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature may be assumed to be constant.
The question arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: Given a spatially compact globally hyperbolic Lorentzian manifold (M,g), does there exist a metric, conformally equivalent to g, with constant scalar curvature? The goal of this thesis is to investigate this problem.
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function from which the conformal factor is obtained. To keep the foundations needed for treating the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the framework of a Cauchy problem. To this end, the inverse function theorem for Banach spaces is applied in order to derive existence results for semilinear wave equations from known existence results for linear wave equations. It is proven that, if the nonlinearity satisfies certain conditions, an almost global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis deals with the Yamabe equation on globally hyperbolic Lorentzian manifolds. First it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation has an almost global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is nonpositive, a global-in-time solution of the Yamabe equation exists, which, however, is not necessarily positive. Moreover, it is shown that, if the H²-norm of the scalar curvature with respect to the given metric is bounded in a certain way on a compact time interval, the solution is positive on this time interval; here it is again assumed that the constant scalar curvature of the conformally equivalent metric is nonpositive. If, in addition, the scalar curvature with respect to the given metric is negative and the metric satisfies certain conditions, then the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a certain way. In both cases, under the stated conditions, the existence of a global-in-time positive solution follows if M = I × Σ for a bounded open interval I. Finally, for M = ℝ × Σ, an example of the nonexistence of a global positive solution is given.
Arborified zeta values are defined as iterated series and integrals using the universal properties of rooted trees. This approach allows one to study their convergence domain and to relate them to multiple zeta values. Generalisations to rooted trees of the stuffle and shuffle products are defined and studied. It is further shown that arborified zeta values are algebra morphisms for these new products on trees.
Several numerical tools designed to overcome the challenges of smoothing in a non-linear and non-Gaussian setting are investigated for a class of particle smoothers. The considered family of smoothers is induced by the class of linear ensemble transform filters, which contains classical filters such as the stochastic ensemble Kalman filter, the ensemble square root filter, and the recently introduced nonlinear ensemble transform filter. Furthermore, the ensemble transform particle smoother is introduced and particularly highlighted, as it is consistent in the particle limit and does not require assumptions with respect to the family of the posterior distribution. The linear update pattern of the considered class of linear ensemble transform smoothers allows one to implement important supplementary techniques such as adaptive spread corrections, hybrid formulations, and localization in order to facilitate their application to complex estimation problems. These additional features are derived and numerically investigated for a sequence of increasingly challenging test problems.
Background:
Anti-TNFα monoclonal antibodies (mAbs) are a well-established treatment for patients with Crohn’s disease (CD). However, subtherapeutic concentrations of mAbs have been related to a loss of response during the first year of therapy [1]. Therefore, an appropriate dosing strategy is crucial to prevent the underexposure of mAbs for those patients. The aim of our study was to assess the impact of different dosing strategies (fixed dose or body size descriptor adapted) on drug exposure and the target concentration attainment for two different anti-TNFα mAbs: infliximab (IFX, body weight (BW)-based dosing) and certolizumab pegol (CZP, fixed dosing). For this purpose, a comprehensive pharmacokinetic (PK) simulation study was performed.
Methods:
A virtual population of 1000 clinically representative CD patients was generated based on the distribution of CD patient characteristics from an in-house clinical database (n = 116). Seven dosing regimens were investigated: fixed dose and per BW, lean BW (LBW), body surface area, height, body mass index and fat-free mass. The individual body size-adjusted doses were calculated from the generated patients' body size descriptor values. Then, using published PK models for IFX and CZP in CD patients [2,3], 1000 concentration–time profiles were simulated for each patient to capture the typical profile of a specific patient as well as the range of possible individual profiles due to unexplained PK variability across patients. For each dosing strategy, the variability in maximum and minimum mAb concentrations (Cmax and Cmin, respectively), area under the concentration-time curve (AUC) and the per cent of patients reaching target concentration were assessed during maintenance therapy.
Results:
For IFX and CZP, Cmin showed the highest variability between patients (CV ≈110% and CV ≈80%, respectively) with a similar extent across all dosing strategies. For IFX, the per cent of patients reaching the target (Cmin = 5 µg/ml) was similar across all dosing strategies (~15%). For CZP, the per cent of patients reaching the target average concentration of 17 µg/ml ranged substantially (52–71%), being the highest for LBW-adjusted dosing.
Conclusion:
Using a PK simulation approach, different dosing regimens of IFX and CZP revealed the highest variability for Cmin, the PK parameter most commonly used to guide treatment decisions, independent of the dosing regimen. Our results demonstrate similar target attainment with fixed dosing of IFX compared with the currently recommended BW-based dosing. For CZP, the current fixed dosing strategy leads to a percentage of patients reaching the target comparable to the best-performing body size-adjusted dosing (66% vs. 71%, respectively).
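The simulation logic of the Methods section can be caricatured with a one-compartment steady-state model. Every parameter value below (clearance, volume, allometric exponents, dose levels, variabilities, target) is invented for illustration and does not reproduce the published IFX/CZP population-PK models:

```python
import numpy as np

rng = np.random.default_rng(6)

# virtual population with between-patient variability
n_pat = 1000
bw = np.clip(rng.normal(75.0, 15.0, n_pat), 40.0, 140.0)          # body weight [kg]
cl = 0.30 * (bw / 75.0) ** 0.75 * rng.lognormal(0.0, 0.3, n_pat)  # clearance [L/day]
v = 5.0 * (bw / 75.0) * rng.lognormal(0.0, 0.2, n_pat)            # volume [L]
tau = 56.0                                                        # q8w dosing [days]

def cmin_steady_state(dose_mg):
    """Steady-state trough after repeated IV bolus dosing."""
    ke = cl / v
    return dose_mg / v * np.exp(-ke * tau) / (1.0 - np.exp(-ke * tau))

cmin_fixed = cmin_steady_state(375.0)        # fixed dose for everyone
cmin_bw = cmin_steady_state(5.0 * bw)        # 5 mg/kg body-weight dosing

target = 5.0                                  # hypothetical trough target [mg/L]
pct_fixed = 100.0 * np.mean(cmin_fixed >= target)
pct_bw = 100.0 * np.mean(cmin_bw >= target)
cv_fixed = 100.0 * cmin_fixed.std() / cmin_fixed.mean()
```

Because the trough depends exponentially on the elimination rate, Cmin shows far larger between-patient variability than Cmax or AUC would, mirroring the abstract's main observation.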
In this article, we propose an all-in-one statement which includes existence, uniqueness, regularity, and numerical approximations of mild solutions for a class of stochastic partial differential equations (SPDEs) with non-globally monotone nonlinearities. The proof of this result exploits the properties of an existing fully explicit space-time discrete approximation scheme, in particular the fact that it satisfies suitable a priori estimates. We also obtain almost sure and strong convergence of the approximation scheme to the mild solutions of the considered SPDEs. We conclude by applying the main result of the article to the stochastic Burgers equations with additive space-time white noise.
We extend our approach of asymptotic parametrix construction for Hamiltonian operators from conical to edge-type singularities which is applicable to coalescence points of two particles of the helium atom and related two electron systems including the hydrogen molecule. Up to second-order, we have calculated the symbols of an asymptotic parametrix of the nonrelativistic Hamiltonian of the helium atom within the Born-Oppenheimer approximation and provide explicit formulas for the corresponding Green operators which encode the asymptotic behavior of the eigenfunctions near an edge.
We prove a Feynman path integral formula for the unitary group exp(−itL_{ν,θ}), t ≥ 0, associated with a discrete magnetic Schrödinger operator L_{ν,θ} on a large class of weighted infinite graphs. As a consequence, we get a new Kato-Simon estimate
|exp(−itL_{ν,θ})(x,y)| ≤ exp(−tL_{−deg,0})(x,y),
which controls the unitary group uniformly in the potentials in terms of a Schrödinger semigroup, where the potential deg is the weighted degree function of the graph.
Let D be a division ring of fractions of a crossed product F[G, eta, alpha], where F is a skew field and G is a group with Conradian left-order <=. For D we introduce the notion of freeness with respect to <= and show that D is free in this sense if and only if D can canonically be embedded into the endomorphism ring of the right F-vector space F((G)) of all formal power series in G over F with respect to <=. From this we obtain that all division rings of fractions of F[G, eta, alpha] which are free with respect to at least one Conradian left-order of G are isomorphic and that they are free with respect to any Conradian left-order of G. Moreover, F[G, eta, alpha] possesses a division ring of fractions which is free in this sense if and only if the rational closure of F[G, eta, alpha] in the endomorphism ring of the corresponding right F-vector space F((G)) is a skew field.
We show how to deduce Rellich inequalities from Hardy inequalities on infinite graphs. Specifically, the obtained Rellich inequality gives an upper bound on a function by the Laplacian of the function in terms of weighted norms. These weights involve the Hardy weight and a function which satisfies an eikonal inequality. The results are proven first for Laplacians and are extended to Schrödinger operators afterwards.
The extension of the natural numbers to include the positive fractions and the negative integers confronts students with major conceptual hurdles and an upheaval of the basic notions they have built up so far. This master's thesis compiles the essential changes, at the level of both mental models and representations, for the two number domains and examines the cognitive challenges they pose for learners. Based on a discussion of traditional as well as alternative teaching sequences for extending the number domain, a teaching concept for mathematics instruction is developed that proposes a parallel introduction of fractions and negative numbers. The recommendations of the teaching concept span the period from the first to the seventh grade, allowing ample time for the careful development and modification of the number concept, and include didactical considerations as well as concrete suggestions for possible task formats.
We consider rough metrics on smooth manifolds and the corresponding Laplacians induced by such metrics. We demonstrate that globally continuous heat kernels exist and are Hölder continuous locally in space and time. This is done via local parabolic Harnack estimates for weak solutions of operators in divergence form with bounded measurable coefficients in weighted Sobolev spaces.
Purpose The anatomy of the circle of Willis (CoW), the brain's main arterial blood supply system, strongly differs between individuals, resulting in highly variable flow fields and intracranial vascularization patterns. To predict subject-specific hemodynamics with high certainty, we propose a data assimilation (DA) approach that merges fully 4D phase-contrast magnetic resonance imaging (PC-MRI) data with a numerical model in the form of computational fluid dynamics (CFD) simulations. Methods To the best of our knowledge, this study is the first to provide a transient state estimate for the three-dimensional velocity field in a subject-specific CoW geometry using DA. High-resolution velocity state estimates are obtained using the local ensemble transform Kalman filter (LETKF). Results Quantitative evaluation shows a considerable reduction (up to 90%) in the uncertainty of the velocity field state estimate after the data assimilation step. Velocity values in vessel areas that are below the resolution of the PC-MRI data (e.g., in posterior communicating arteries) are provided. Furthermore, the uncertainty of the analysis-based wall shear stress distribution is reduced by a factor of 2 for the data assimilation approach when compared to the CFD model alone. Conclusion This study demonstrates the potential of data assimilation to provide detailed information on vascular flow, and to reduce the uncertainty in such estimates by combining various sources of data in a statistically appropriate fashion.
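The analysis step at the heart of the LETKF can be sketched as a minimal global ensemble transform Kalman filter update on a toy two-dimensional state; the localization, the CFD model, and the PC-MRI data of the study are not represented, and all parameter values below are illustrative only.

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """One (global) ensemble transform Kalman filter analysis step; the LETKF
    applies this update in local windows. X: (n, m) ensemble of states,
    y: (p,) observation, H: (p, n) observation operator, R: (p, p) obs covariance."""
    n, m = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    A = X - xbar                                   # state anomalies
    Y = H @ A                                      # observation-space anomalies
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Y.T @ Rinv @ Y)
    wbar = Pa @ Y.T @ Rinv @ (y.reshape(-1, 1) - H @ xbar)
    evals, evecs = np.linalg.eigh((m - 1) * Pa)    # symmetric square root
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xbar + A @ (wbar + W)                   # analysis ensemble, shape (n, m)

# toy check: an accurate direct observation of the first state component
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2, 20))
y = np.array([3.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
Xa = etkf_analysis(X, y, H, R)
```

After the update, the ensemble mean of the observed component moves close to the observation and the ensemble spread shrinks, mirroring the uncertainty reduction reported in the study.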
This thesis is concerned with data assimilation, the process of combining model predictions with observations. So-called filters are of special interest: one is interested in computing the probability distribution of the state of a physical process in the future, given (possibly) imperfect measurements. This is done using Bayes' rule. The first part focuses on hybrid filters, which bridge between the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they rest on rather strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive, and as such not always suitable for high-dimensional systems. There is therefore a need to combine the two groups so as to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation, treating one part of it with an EnKF and the other part with a particle filter.
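One of the two building blocks of such hybrids, the particle filter analysis step, can be sketched as follows. This is a generic bootstrap update with systematic resampling on a Gaussian toy problem, not the thesis' likelihood-splitting scheme itself; all distributions and parameters are illustrative assumptions.

```python
import numpy as np

def particle_update(x, y, obs_lik, rng):
    """One bootstrap particle filter analysis step: weight the particles by the
    likelihood of the new observation, then apply systematic resampling."""
    w = obs_lik(y, x)
    w = w / w.sum()
    cw = np.cumsum(w)
    cw[-1] = 1.0                                    # guard against round-off
    u = (rng.random() + np.arange(len(x))) / len(x) # systematic resampling grid
    return x[np.searchsorted(cw, u)]

# toy example: Gaussian prior particles, Gaussian observation likelihood
rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 5000)                      # prior N(0, 1)
obs_lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2 / 0.5)  # obs noise variance 0.5
xa = particle_update(x, 2.0, obs_lik, rng)          # posterior is N(4/3, 1/3)
```

In a hybrid scheme, a tempered power of this likelihood would be assimilated here while the remaining power is handled by the EnKF part.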
The second part of this thesis deals with the application of data assimilation to multi-scale models and the problems that arise from it. One of the main areas of application for data assimilation techniques is predicting the development of the oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of data assimilation procedures usually violates relations of this kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem that penalises the imbalance is solved. The method is tested on four different models: two Hamiltonian systems and two spatially extended models, the latter adding further difficulties.
Global numerical weather prediction (NWP) models have begun to resolve the mesoscale k(-5/3) range of the energy spectrum, which is known to impose an inherently finite range of deterministic predictability, since errors develop more rapidly on these scales than on the larger scales. However, the dynamics of these errors under the influence of the synoptic-scale k(-3) range is little studied. Within a perfect-model context, the present work examines the error growth behavior under such a hybrid spectrum in Lorenz's original model of 1969, and in a series of identical-twin perturbation experiments using an idealized two-dimensional barotropic turbulence model at a range of resolutions. With the typical resolution of today's global NWP ensembles, error growth remains largely uniform across scales. The theoretically expected fast error growth characteristic of a k(-5/3) spectrum is seen to be largely suppressed in the first decade of the mesoscale range by the synoptic-scale k(-3) range. However, it emerges once models become fully able to resolve features on something like a 20-km scale, which corresponds to a grid resolution on the order of a few kilometers.
The rational Krylov subspace method (RKSM) and the low-rank alternating directions implicit (LR-ADI) iteration are established numerical tools for computing low-rank solution factors of large-scale Lyapunov equations. In order to generate the basis vectors for the RKSM, or extend the low-rank factors within the LR-ADI method, the repeated solution of a shifted linear system of equations is necessary. For very large systems this solve is usually implemented using iterative methods, leading to inexact solves within this inner iteration (and therefore to "inexact methods"). We will show that one can terminate this inner iteration before full precision has been reached and still obtain very good accuracy in the final solution of the Lyapunov equation. In particular, for both the RKSM and the LR-ADI method we derive theory for a relaxation strategy (i.e. increasing the solve tolerance of the inner iteration as the outer iteration proceeds) within the iterative methods for solving the large linear systems. These theoretical choices involve unknown quantities; practical criteria for relaxing the solution tolerance within the inner linear system are therefore provided. The theory is supported by several numerical examples, which show that the total amount of work for solving Lyapunov equations can be reduced significantly.
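To illustrate the inner-outer structure (this is not the paper's algorithm or its tolerance criteria), the following sketch runs a real-shift LR-ADI iteration in which each shifted system is solved by conjugate gradients whose tolerance is relaxed at every outer step; the shifts, the relaxation factor, and the test matrix are ad hoc choices.

```python
import numpy as np

def cg(M, b, tol):
    """Plain conjugate gradients for an SPD system M x = b, stopped at relative tol."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs, b2 = r @ r, b @ b
    for _ in range(10 * len(b)):
        Mp = M @ p
        a = rs / (p @ Mp)
        x += a * p
        r -= a * Mp
        rs_new = r @ r
        if rs_new <= tol ** 2 * b2:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def lr_adi(A, B, shifts, inner_tol0=1e-10, relax=10.0):
    """Real-shift low-rank ADI for A X + X A^T = -B B^T (A symmetric negative
    definite here). The inner CG tolerance is multiplied by `relax` after every
    outer step, i.e. later shifted solves are allowed to be less accurate."""
    n = A.shape[0]
    # (A + p I) v = b is solved as the SPD system -(A + p I) v = -b
    solve = lambda p, tol, V: np.column_stack(
        [cg(-(A + p * np.eye(n)), -V[:, j], tol) for j in range(V.shape[1])])
    p, tol = shifts[0], inner_tol0
    V = np.sqrt(-2.0 * p) * solve(p, tol, B)
    Z = V
    for i in range(1, len(shifts)):
        p_prev, p = shifts[i - 1], shifts[i]
        tol *= relax                       # relaxation of the inner tolerance
        V = np.sqrt(p / p_prev) * (V - (p + p_prev) * solve(p, tol, V))
        Z = np.hstack([Z, V])
    return Z                               # X is approximated by Z @ Z.T

# toy check via the Lyapunov residual: a stiff 1-D diffusion operator
n = 30
A = -(2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * (n + 1) ** 2 / 10.0
B = np.ones((n, 1))
Z = lr_adi(A, B, shifts=[-1.0, -5.0, -25.0, -125.0, -625.0])
X = Z @ Z.T
res = np.linalg.norm(A @ X + X @ A.T + B @ B.T) / np.linalg.norm(B @ B.T)
```

Despite the inner tolerance growing by four orders of magnitude over the outer sweep, the final Lyapunov residual remains small, which is the qualitative effect the paper analyzes.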
Interacting particle solutions of Fokker–Planck equations through gradient–log–density estimation
(2020)
Fokker-Planck equations are extensively employed in various scientific fields, as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and one must often resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean-field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. Our method shows promising performance, with more accurate and less fluctuating statistics than direct stochastic simulations with a comparable number of particles. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example in the context of optimal control.
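A minimal sketch of the particle idea, for a one-dimensional Ornstein-Uhlenbeck Fokker-Planck equation: here a plain Gaussian kernel density estimate with a Silverman bandwidth stands in for the paper's novel gradient-log-density estimator, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def kde_score(x, h):
    """Gradient of the log of a Gaussian kernel density estimate, evaluated at
    the particles themselves; a simple stand-in for the paper's estimator."""
    d = x[:, None] - x[None, :]              # pairwise differences x_i - x_j
    K = np.exp(-0.5 * (d / h) ** 2)          # Gaussian kernel weights
    return -(K * d).sum(axis=1) / (h ** 2 * K.sum(axis=1))

# 1-D Ornstein-Uhlenbeck Fokker-Planck equation dp/dt = d/dx(x p) + d^2p/dx^2:
# drift f(x) = -x, unit diffusion. Each particle follows the deterministic flow
# dx/dt = f(x) - grad log p(x), with the score replaced by its estimate.
rng = np.random.default_rng(1)
N, dt, T = 400, 0.01, 5.0
x = rng.normal(2.0, 0.5, N)                  # start far from the N(0,1) equilibrium
for _ in range(int(T / dt)):
    h = 1.06 * x.std() * N ** (-0.2)         # Silverman bandwidth (assumed choice)
    x += dt * (-x - kde_score(x, h))
```

The particle ensemble relaxes deterministically toward the stationary density N(0, 1) of the equation, without the sampling noise of a direct stochastic simulation.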
This thesis aims to present, in an organized fashion, the basics required to understand the Glauber dynamics as a way of simulating configurations according to the Gibbs distribution of the Curie-Weiss Potts model. To this end, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and the related mixing times. Furthermore, special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, the Glauber dynamics is studied in general and then applied, by way of example, to the Curie-Weiss model as well as the Curie-Weiss Potts model. These considerations are supplemented with two computer simulations intended to illustrate the cutoff phenomenon and the temperature dependence of the convergence behavior.
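For the simplest case q = 2 (the Curie-Weiss Ising model), a heat-bath Glauber step can be sketched as follows: pick a uniform site and resample its spin from the conditional Gibbs distribution given the mean field of the other spins. The system size, temperature, and run length below are illustrative only.

```python
import numpy as np

def glauber_curie_weiss(n, beta, steps, rng):
    """Heat-bath Glauber dynamics for the Curie-Weiss Ising model (q = 2)."""
    sigma = rng.choice([-1, 1], size=n)
    S = sigma.sum()
    for _ in range(steps):
        i = rng.integers(n)
        h = beta * (S - sigma[i]) / n            # mean field of the other spins
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h))  # heat-bath probability of spin +1
        new = 1 if rng.random() < p_plus else -1
        S += new - sigma[i]
        sigma[i] = new
    return sigma

rng = np.random.default_rng(2)
n = 200
# low temperature (beta > 1): the magnetization concentrates near a nonzero
# root of the mean-field equation m = tanh(beta * m)
sigma = glauber_curie_weiss(n, beta=2.0, steps=50 * n, rng=rng)
m = abs(sigma.mean())
```

Repeating the run at beta < 1 drives the magnetization back toward zero, which is the temperature dependence of the convergence behavior the simulations in the thesis are meant to exhibit.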
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise due to nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model as well as deep neural network (DNN) approaches, all are superior to CFS with predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and are also better than the distance-slip probabilistic model.
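The binary ROC evaluation used for ranking stress metrics can be sketched on synthetic data; the scores below are simulated draws, not Coulomb stress values from SRCMOD, and the separation of the two populations is an assumption for illustration.

```python
import numpy as np

def roc_auc(score, label):
    """ROC curve and area under it for a binary aftershock test (yes/no per
    grid cell), ranking cells by a scalar stress metric."""
    order = np.argsort(-score)                # descending score = threshold sweep
    label = label[order]
    tpr = np.concatenate([[0.0], np.cumsum(label) / label.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - label) / (1 - label).sum()])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)  # trapezoid rule
    return fpr, tpr, auc

# synthetic experiment: cells that host aftershocks tend to carry higher stress change
rng = np.random.default_rng(3)
stress = np.concatenate([rng.normal(1.0, 1.0, 300),    # cells with aftershocks
                        rng.normal(0.0, 1.0, 700)])    # cells without
label = np.concatenate([np.ones(300), np.zeros(700)])
fpr, tpr, auc = roc_auc(stress, label)
```

An AUC of 0.5 corresponds to a metric with no predictive skill; ranking several stress metrics by their AUC on the same cell grid is the comparison the abstract describes.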
In this paper, we develop the mathematical tools needed to explore isotopy classes of tilings on hyperbolic surfaces of finite genus, possibly nonorientable, with boundary, and punctured. More specifically, we generalize results on Delaney-Dress combinatorial tiling theory using an extension of mapping class groups to orbifolds, in turn using this to study tilings of covering spaces of orbifolds. Moreover, we study finite subgroups of these mapping class groups. Our results can be used to extend the Delaney-Dress combinatorial encoding of a tiling to yield a finite symbol encoding the complexity of an isotopy class of tilings. The results of this paper provide the basis for a complete and unambiguous enumeration of isotopically distinct tilings of hyperbolic surfaces.
We investigate whether kernel regularization methods can achieve minimax convergence rates over a source-condition regularity assumption for the target function. These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator. From the perspective of distribution-free results, we investigate this issue under much weaker assumptions on the eigenvalue decay, allowing for more complex behavior that can reflect different structures of the data at different scales.
The purpose of this paper is to build an algebraic framework suited to regularize branched structures emanating from rooted forests and which encodes the locality principle. This is achieved by means of the universal properties in the locality framework of properly decorated rooted forests. These universal properties are then applied to derive the multivariate regularization of integrals indexed by rooted forests. We study their renormalization, along the lines of Kreimer's toy model for Feynman integrals.
We construct marked Gibbs point processes in R^d under quite general assumptions. Firstly, we allow for interaction functionals that may be unbounded and whose range is not assumed to be uniformly bounded; indeed, our typical interaction admits an a.s. finite but random range. Secondly, the random marks, attached to the locations in R^d, belong to a general normed space G. They are not bounded, but their law should admit a super-exponential moment. The approach used here relies on the so-called entropy method and large-deviation tools in order to prove tightness of a family of finite-volume Gibbs point processes. An application to infinite-dimensional interacting diffusions is also presented.
Let M be a compact manifold of dimension n. In this paper, we introduce the mass functions a >= 0 |-> X^+(M)(a) and a >= 0 |-> X^-(M)(a), defined as the supremum and infimum, respectively, of the masses of all metrics on M whose Yamabe constant is larger than a and which are flat on a ball of radius 1 centered at a point p in M. Here, the mass of a metric flat around p is the constant term in the expansion of the Green function of the conformal Laplacian at p. We show that these functions are well defined and have many properties which allow us to obtain applications to the Yamabe invariant (i.e. the supremum of Yamabe constants over the set of all metrics on M).
The Willmore functional maps an immersed Riemannian manifold to its total mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
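For a closed surface Sigma immersed in Euclidean three-space, one common normalization of this energy is (a standard textbook definition recalled for orientation, not quoted from the thesis; conventions differ by constant factors):

```latex
\mathcal{W}(\Sigma) \;=\; \frac{1}{4}\int_{\Sigma} H^{2}\,\mathrm{d}A ,
```

where H is the sum of the principal curvatures and dA the induced area element; the generalized functionals studied below modify this integrand.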
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth away from finitely many points. These and the following results build on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It should be mentioned that our results stand in contrast to previous expansions of the Hawking energy; those expansions, however, are obtained on spheres along the light cone at a given point. At present it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.
Synthetic Aperture Radar (SAR) amplitude measurements from spaceborne sensors are sensitive to surface roughness conditions near their radar wavelength. These backscatter signals are often exploited to assess the roughness of plowed agricultural fields and water surfaces, and less so of complex, heterogeneous geological surfaces. The bedload of mixed sand- and gravel-bed rivers can be considered a mixture of smooth (compacted sand) and rough (gravel) surfaces. Here, we assess backscatter gradients over a large high-mountain alluvial river in the eastern Central Andes with aerially exposed sand and gravel bedload using X-band TerraSAR-X/TanDEM-X, C-band Sentinel-1, and L-band ALOS-2 PALSAR-2 radar scenes. In a first step, we present theory and hypotheses regarding radar response to an alluvial channel bed. We test our hypotheses by comparing backscatter responses over vegetation-free endmember surfaces from inside and outside of the active channel-bed area. We then develop methods to extract smoothed backscatter gradients downstream along the channel using kernel density estimates. In a final step, the local variability of sand-dominated patches is analyzed using Fourier frequency analysis, by fitting stretched-exponential and power-law regression models to the 2-D power spectrum of backscatter amplitude. We find a large range in backscatter depending on the heterogeneity of contiguous smooth and rough patches of bedload material. The SAR amplitude signal responds primarily to the fraction of smooth-sand bedload, but is further modified by gravel elements. The sensitivity to gravel is more apparent in longer wavelength L-band radar, whereas C- and X-band are sensitive only to sand variability. Because the spatial extent of smooth sand patches in our study area is typically < 50 m, only higher resolution sensors (e.g., TerraSAR-X/TanDEM-X) are useful for power spectrum analysis.
Our results show the potential for mapping sand-gravel transitions and local geomorphic complexity in alluvial rivers with aerially exposed bedload using SAR amplitude.
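The radially averaged 2-D power spectrum and the power-law fit mentioned above can be sketched as follows, on a synthetic random-phase surface rather than TerraSAR-X amplitude data; the patch size and the prescribed k^-3 spectrum are illustrative assumptions.

```python
import numpy as np

def radial_power_spectrum(z):
    """Radially averaged 2-D power spectrum of a square image patch."""
    n = z.shape[0]
    P = np.abs(np.fft.fftshift(np.fft.fft2(z - z.mean()))) ** 2
    ky, kx = np.indices(P.shape) - n // 2
    k = np.hypot(kx, ky).astype(int)                # integer wavenumber bins
    spec = np.bincount(k.ravel(), weights=P.ravel()) / np.bincount(k.ravel())
    kk = np.arange(1, n // 2)                       # skip k = 0, stop before Nyquist
    return kk, spec[1:n // 2]

# synthetic rough surface synthesized with a k^-3 power spectrum (random phases)
rng = np.random.default_rng(4)
n = 128
ky, kx = np.indices((n, n)) - n // 2
k = np.hypot(kx, ky)
amp = np.zeros((n, n))
amp[k > 0] = k[k > 0] ** -1.5                       # amplitude k^-1.5 <=> power k^-3
z = np.real(np.fft.ifft2(np.fft.ifftshift(amp * np.exp(2j * np.pi * rng.random((n, n))))))
kk, spec = radial_power_spectrum(z)
slope = np.polyfit(np.log(kk), np.log(spec), 1)[0]  # fitted spectral exponent
```

The fitted log-log slope recovers the prescribed spectral exponent; on real SAR amplitude patches, the same fit would characterize the roughness regime of the channel bed.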