510 Mathematik
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by the pth power of a weighted p-norm, p>1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs are chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p=2, or locally finite graphs without a potential part.
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
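As an illustrative sketch (the notation here is an assumption chosen for exposition, not quoted from the thesis), a Hardy inequality on a graph takes the form:

```latex
% Hardy inequality on a graph (illustrative notation): the energy functional,
% with divergence part b and potential part c, is bounded below by the
% pth power of a weighted p-norm with Hardy weight w >= 0.
\sum_{x,y \in X} b(x,y)\,\lvert f(x)-f(y)\rvert^{p}
  \;+\; \sum_{x \in X} c(x)\,\lvert f(x)\rvert^{p}
  \;\ge\; \sum_{x \in X} w(x)\,\lvert f(x)\rvert^{p},
  \qquad p > 1,
```

for all finitely supported functions f on the vertex set X; roughly speaking, a Hardy weight w is optimal if it cannot be enlarged while keeping the inequality valid.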
We present general existence and uniqueness results for marked models with pair interactions, exemplified through Gibbs point processes on path space.
More precisely, we study a class of infinite-dimensional diffusions under Gibbsian interactions, in the context of marked point configurations: the starting points belong to R^d, and the marks are the paths of Langevin diffusions.
We use the entropy method to prove existence of an infinite-volume Gibbs point process and use cluster expansion tools to provide an explicit activity domain in which uniqueness holds.
Mathematical modelling and statistical inference provide a framework to evaluate different non-pharmaceutical and pharmaceutical interventions for the control of epidemics that has been widely used during the COVID-19 pandemic. In this paper, lessons learned from this and previous epidemics are used to highlight the challenges for future pandemic control. We consider the availability and use of data, as well as the need for correct parameterisation and calibration for different model frameworks. We discuss challenges that arise in describing and distinguishing between different interventions, within different modelling structures, and in allowing for both within-host and between-host dynamics. We also highlight challenges in modelling the health economic and political aspects of interventions. Given the diversity of these challenges, a broad variety of interdisciplinary expertise is needed to address them, combining mathematical knowledge with biological and social insights, and including health economics and communication skills. Addressing these challenges for the future requires strong cross-disciplinary collaboration together with close communication between scientists and policy makers.
This paper deals with the long-term behavior of positive operator semigroups on spaces of bounded functions and of signed measures, which have applications to parabolic equations with unbounded coefficients and to stochastic analysis. The main results are a Tauberian-type theorem characterizing the convergence to equilibrium of strongly Feller semigroups and a generalization of a classical convergence theorem of Doob. None of these results requires any kind of time regularity of the semigroup.
We present a Reduced Order Model (ROM) which exploits recent developments in Physics Informed Neural Networks (PINNs) for solving inverse problems for the Navier-Stokes equations (NSE). In the proposed approach, the presence of simulated data for the fluid dynamics fields is assumed. A POD-Galerkin ROM is then constructed by applying POD on the snapshot matrices of the fluid fields and performing a Galerkin projection of the NSE (or the modified equations in the case of turbulence modeling) onto the POD reduced basis. A POD-Galerkin PINN ROM is then derived by introducing deep neural networks which approximate the reduced outputs, with the input being time and/or parameters of the model. The neural networks incorporate the physical equations (the POD-Galerkin reduced equations) into their structure as part of the loss function. Using this approach, the reduced model is able to approximate unknown parameters such as physical constants or the boundary conditions. The applicability of the proposed ROM is illustrated by three cases: the steady flow around a backward-facing step, the flow around a circular cylinder, and the unsteady turbulent flow around a surface-mounted cubic obstacle.
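The core idea of embedding reduced equations into a network loss can be sketched in a few lines; this is an illustration of the general technique, not the paper's implementation, and the reduced operator, network architecture, and all names are assumptions (a linear reduced system stands in for the POD-Galerkin reduced NSE):

```python
import numpy as np

# A small network maps time t to the reduced coefficients a(t) in R^r.
# The loss adds the residual of an assumed linear reduced system
# da/dt = A_red a (a stand-in for the POD-Galerkin reduced equations)
# to the misfit against projected snapshot data.

r = 2                                            # number of POD modes (assumed)
A_red = np.array([[-1.0, 0.0], [0.0, -2.0]])     # assumed reduced operator

def net(t, W1, b1, W2, b2):
    """One-hidden-layer network: times t (n,) -> reduced coefficients (n, r)."""
    h = np.tanh(t[:, None] @ W1 + b1)
    return h @ W2 + b2

def pinn_loss(params, t_data, a_data, t_col, dt=1e-4):
    """Data misfit + physics residual, the two terms of a PINN-style loss."""
    data_term = np.mean((net(t_data, *params) - a_data) ** 2)
    # residual da/dt - A_red a at collocation points, via central differences
    dadt = (net(t_col + dt, *params) - net(t_col - dt, *params)) / (2 * dt)
    res = dadt - net(t_col, *params) @ A_red.T
    return data_term + np.mean(res ** 2)
```

In the paper's setting the reduced equations come from the Galerkin projection of the (possibly turbulence-modelled) NSE, the inputs may include physical parameters, and unknown constants are trained jointly with the network weights.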
The past three decades of policy process studies have seen the emergence of a clear intellectual lineage with regard to complexity. Implicitly or explicitly, scholars have employed complexity theory to examine the intricate dynamics of collective action in political contexts. However, the methodological counterparts to complexity theory, such as computational methods, are rarely used and, even if they are, they are often detached from established policy process theory. Building on a critical review of the application of complexity theory to policy process studies, we present and implement a baseline model of policy processes using the logic of coevolving networks. Our model suggests that an actor's influence depends on their environment and on exogenous events facilitating dialogue and consensus-building. Our results validate previous opinion dynamics models and generate novel patterns. Our discussion provides ground for further research and outlines the path for the field to achieve a computational turn.
Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and processing electronics for computing the absorbed dose and dose rate. Among a wide range of existing radiation sensors, the Radiation Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement, and a proven record of successful exploitation in space missions. It has been shown that RADFETs may also be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as an absorbed dose and dose rate monitor. This reduces the cost of implementation, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response, we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing autonomous switching between three operating modes (high-performance, de-stress and fault-tolerant), according to the application requirements and radiation conditions.
In this work we consider first-encounter problems between a fixed and/or mobile target A and a moving trap B on Bethe lattices and Cayley trees. The survival probabilities (SPs) of the target A on both kinds of structures are considered analytically and compared. On Bethe lattices, the results show that the fixed target will still prolong its survival time, whereas, on Cayley trees, there are some initial positions from which the target should move to prolong its survival time. The mean first encounter time (MFET) for a mobile target A is evaluated numerically and compared with the mean first passage time (MFPT) for the fixed target A. Different initial settings are addressed and clear boundaries are obtained. These findings are helpful for optimizing the strategy to prolong the survival time of the target or to speed up the search process on Cayley trees, in relation to the target's movement and the initial position configuration of the two walkers. We also present a new method, which uses a small amount of memory, for simulating random walks on Cayley trees.
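One standard way to simulate such walks with O(1) memory (an assumed illustration of the low-memory idea, not necessarily the paper's method) is to track only the walker's distance from the root: by symmetry, the radial part of a random walk on a Cayley tree with coordination number z is a biased one-dimensional walk.

```python
import random

def radial_walk(z, depth, start, steps, rng):
    """Simulate only the distance-to-root of a random walk on a finite
    Cayley tree with coordination number z and the given depth.
    From an interior vertex the walker steps to one of its z-1 children
    with probability (z-1)/z and to its parent with probability 1/z;
    the root forces an outward step and a leaf forces an inward step.
    Returns the list of visited distances (O(1) state during simulation)."""
    d = start
    trace = [d]
    for _ in range(steps):
        if d == 0:                       # root: must step outward
            d = 1
        elif d == depth:                 # leaf: must step back
            d -= 1
        elif rng.random() < (z - 1) / z:
            d += 1                       # step to one of the z-1 children
        else:
            d -= 1                       # step to the parent
        trace.append(d)
    return trace
```

This radial projection suffices for quantities that depend only on the distance to the root (e.g. first-passage to the root); resolving the full two-walker encounter problem requires more state.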
An instance of the marriage problem is given by a graph G = (A ∪ B, E), together with, for each vertex of G, a strict preference order over its neighbors. A matching M of G is popular in the marriage instance if M does not lose a head-to-head election against any matching M', where the vertices are the voters. Every stable matching is a min-size popular matching; another subclass of popular matchings that always exists and can be easily computed is the set of dominant matchings. A popular matching M is dominant if M wins the head-to-head election against any larger matching. Thus, every dominant matching is a max-size popular matching, and it is known that the set of dominant matchings is the linear image of the set of stable matchings in an auxiliary graph. Results from the literature seem to suggest that stable and dominant matchings behave, from a complexity theory point of view, in a very similar manner within the class of popular matchings. The goal of this paper is to show that there are instead differences in the tractability of stable and dominant matchings and to investigate further their importance for popular matchings. First, we show that it is easy to check whether all popular matchings are also stable; however, it is co-NP-hard to check whether all popular matchings are also dominant. Second, we show how some new and recent hardness results on popular matching problems can be deduced from the NP-hardness of certain problems on stable matchings, also studied in this paper, thus showing that stable matchings can be employed to show not only positive results on popular matchings (as is known) but also most negative ones. Problems for which we show new hardness results include finding a min-size (resp., max-size) popular matching that is not stable (resp., dominant). A known result for which we give a new and simple proof is the NP-hardness of finding a popular matching when G is non-bipartite.
The Levenberg–Marquardt regularization for the backward heat equation with fractional derivative
(2022)
The backward heat problem with time-fractional derivative in Caputo's sense is studied. The inverse problem is severely ill-posed in the case when the fractional order is close to unity. A Levenberg-Marquardt method with a new a posteriori stopping rule is investigated. We show that optimal order can be obtained for the proposed method under a Hölder-type source condition. Numerical examples for one and two dimensions are provided.
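The shape of a Levenberg-Marquardt iteration with an a posteriori stopping rule can be sketched as follows; this is an illustrative linear-algebra stand-in using the classical discrepancy principle, not the paper's new stopping rule or its time-fractional setting:

```python
import numpy as np

def levenberg_marquardt(K, y_delta, delta, alpha0=1.0, q=0.5, tau=1.1,
                        max_iter=15):
    """Levenberg-Marquardt iteration for a linear ill-posed system K x = y,
    stopped a posteriori once the discrepancy ||K x - y_delta|| <= tau * delta.
    (Illustrative: the paper treats the time-fractional backward heat problem
    with a different, new stopping rule.)"""
    n = K.shape[1]
    x = np.zeros(n)
    alpha = alpha0
    for _ in range(max_iter):
        r = y_delta - K @ x
        if np.linalg.norm(r) <= tau * delta:
            break                        # a posteriori stopping rule
        # damped Gauss-Newton step: (K^T K + alpha I) h = K^T r
        h = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ r)
        x = x + h
        alpha *= q                       # geometric decay of the damping
    return x
```

Each step minimizes ||K h - r||^2 + alpha ||h||^2, so the residual is non-increasing; the stopping rule prevents the iteration from fitting the noise in y_delta.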
Congenital adrenal hyperplasia (CAH) is the most common form of adrenal insufficiency in childhood; it requires cortisol replacement therapy with hydrocortisone (HC, synthetic cortisol) from birth and therapy monitoring for successful treatment. In children, the less invasive dried blood spot (DBS) sampling with whole blood including red blood cells (RBCs) provides an advantageous alternative to plasma sampling.
Potential differences in binding/association processes between plasma and DBS, however, need to be considered to correctly interpret DBS measurements for therapy monitoring. While capillary DBS samples would be used in clinical practice, venous cortisol DBS samples from children with adrenal insufficiency were analyzed here, owing to data availability, to directly compare venous DBS with plasma and thus understand their potential differences. A previously published HC plasma pharmacokinetic (PK) model was extended by leveraging these DBS concentrations.
In addition to the previously characterized binding of cortisol to albumin (a linear process) and corticosteroid-binding globulin (CBG; a saturable process), the DBS data enabled the characterization of a linear cortisol association with RBCs, thereby providing a quantitative link between DBS and plasma cortisol concentrations. The ratio between the observed cortisol plasma and DBS concentrations varies widely, from 2 to 8. Deterministic simulations of the different cortisol binding/association fractions demonstrated that at higher blood cortisol concentrations, saturation of cortisol binding to CBG occurs, leading to an increase in all other cortisol binding fractions.
In conclusion, a mathematical PK model was developed which links DBS measurements to plasma exposure and thus allows for quantitative interpretation of measurements of DBS samples.
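The binding/association structure described above can be sketched as follows; the parameterization and all numbers are illustrative assumptions, not the published model's estimates:

```python
def cortisol_fractions(free, nalb, bmax, kd, nrbc):
    """Partition of cortisol given the free plasma concentration:
      albumin-bound:  linear,    nalb * free
      CBG-bound:      saturable, bmax * free / (kd + free)
      RBC-associated: linear,    nrbc * free  (the DBS-specific association)
    Returns (plasma_total, dbs_total).  Crude sketch: the DBS total simply
    adds the RBC fraction and ignores hematocrit scaling."""
    alb = nalb * free
    cbg = bmax * free / (kd + free)
    rbc = nrbc * free
    plasma = free + alb + cbg
    dbs = plasma + rbc
    return plasma, dbs
```

With such a structure, the total-to-free ratio in plasma falls as the free concentration rises, reproducing the saturation behaviour described in the abstract.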
In this article we prove upper bounds for the Laplace eigenvalues λ_k below the essential spectrum for strictly negatively curved Cartan-Hadamard manifolds. Our bound is given in terms of k^2 and specific geometric data of the manifold. This applies also to the particular case of non-compact manifolds whose sectional curvature tends to -∞, where no essential spectrum is present due to a theorem of Donnelly/Li. The result stands in clear contrast to Laplacians on graphs, where such a bound fails to be true in general.
Diffusion maps is a manifold learning algorithm widely used for dimensionality reduction. Using a sample from a distribution, it approximates the eigenvalues and eigenfunctions of associated Laplace-Beltrami operators. Theoretical bounds on the approximation error are, however, generally much weaker than the rates that are seen in practice. This paper uses new approaches to improve the error bounds in the model case where the distribution is supported on a hypertorus. For the data sampling (variance) component of the error we make spatially localized compact embedding estimates on certain Hardy spaces; we study the deterministic (bias) component as a perturbation of the Laplace-Beltrami operator's associated PDE and apply relevant spectral stability results. Using these approaches, we match long-standing pointwise error bounds for both the spectral data and the norm convergence of the operator discretization. We also introduce an alternative normalization for diffusion maps based on Sinkhorn weights. This normalization approximates a Langevin diffusion on the sample and yields a symmetric operator approximation. We prove that it has better convergence compared with the standard normalization on flat domains, and we present a highly efficient rigorous algorithm to compute the Sinkhorn weights.
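The Sinkhorn-normalized construction can be sketched as follows; this is a generic illustration of doubly stochastic kernel normalization (function name, kernel choice, and iteration count are assumptions), not the paper's algorithm or its convergence guarantees:

```python
import numpy as np

def sinkhorn_diffusion_map(X, eps, n_iter=500):
    """Diffusion-map style operator with Sinkhorn normalization: find weights
    d so that P_ij = d_i K_ij d_j has unit row sums, giving a symmetric,
    doubly stochastic Markov operator on the sample X."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / eps)                    # Gaussian kernel matrix
    d = np.ones(len(X))
    for _ in range(n_iter):                  # symmetric Sinkhorn iteration
        d = np.sqrt(d / (K @ d))
    P = d[:, None] * K * d[None, :]
    vals, vecs = np.linalg.eigh(P)           # symmetric => real spectrum
    return P, vals[::-1], vecs[:, ::-1]      # eigenvalues in descending order
```

Because P is symmetric, its eigendecomposition is real and numerically stable, which is one practical attraction of this normalization over the standard row-stochastic one.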
Our input is a complete graph G on n vertices where each vertex has a strict ranking of all other vertices in G. The goal is to construct a matching in G that is popular. A matching M is popular if M does not lose a head-to-head election against any matching M': here each vertex casts a vote for the matching in {M, M'} in which it gets a better assignment. Popular matchings need not exist in the given instance G, and the popular matching problem is to decide whether one exists or not. The popular matching problem in G is easy to solve for odd n. Surprisingly, the problem becomes NP-complete for even n, as we show here. This is one of the few graph-theoretic problems that is efficiently solvable when n has one parity and NP-complete when n has the other parity.
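The head-to-head election underlying popularity is simple to make concrete; the following sketch (data layout and names are assumptions for illustration) counts the votes when two matchings are compared:

```python
def compare(M1, M2, pref):
    """Head-to-head election between matchings M1 and M2, given as dicts
    mapping each matched vertex to its partner.  Each vertex votes for the
    matching in which it gets a better assignment under its preference list;
    being unmatched is worse than any listed partner.
    Returns (votes for M1, votes for M2); M1 is not beaten by M2 iff the
    second count does not exceed the first."""
    v1 = v2 = 0
    for v, ranking in pref.items():
        a, b = M1.get(v), M2.get(v)
        if a == b:
            continue                     # indifferent vertices abstain
        ra = ranking.index(a) if a in ranking else len(ranking)
        rb = ranking.index(b) if b in ranking else len(ranking)
        if ra < rb:
            v1 += 1
        elif rb < ra:
            v2 += 1
    return v1, v2
```

A matching M is popular precisely when no rival M' collects strictly more votes than M in this election; deciding existence of such an M is the problem shown here to be parity-sensitive.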
We establish a new approach for treating elliptic boundary value problems (BVPs) on manifolds with boundary and regular corners, up to singularity order 2. Ellipticity and parametrices are obtained in terms of symbols taking values in algebras of BVPs on manifolds of corresponding lower singularity orders. Those refer to Boutet de Monvel's calculus of operators with the transmission property, see Boutet de Monvel (Acta Math 126:11-51, 1971) for the case of a smooth boundary. On corner configurations, operators act in spaces with multiple weights. We mainly study the case of the upper-left entries in the respective 2 x 2 operator block-matrices of such a calculus. Green operators in the sense of Boutet de Monvel (Acta Math 126:11-51, 1971) analogously appear in singular cases, and they are complemented by contributions of Mellin type. We formulate a result on ellipticity and the Fredholm property in weighted corner spaces, with parametrices of an analogous kind.
The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally allows us to incorporate prior knowledge and to quantify the uncertainty of the resulting estimates. Therefore, we introduce a highly flexible, non-parametric representation for the spatially varying ETAS background intensity through a Gaussian process (GP) prior. Combined with classical triggering functions, this results in a new model formulation, namely the GP-ETAS model. We enable tractable and efficient Gibbs sampling by deriving an augmented form of the GP-ETAS inference problem. This novel sampling approach allows us to assess the posterior model variables conditioned on observed earthquake catalogues, i.e., the spatial background intensity and the parameters of the triggering function. Empirical results on two synthetic data sets indicate that GP-ETAS outperforms standard models and thus demonstrate its predictive power for observed earthquake catalogues, including uncertainty quantification for the estimated parameters. Finally, a case study for the L'Aquila region, Italy, with the devastating event of 6 April 2009, is presented.
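For orientation, the conditional intensity of a standard spatio-temporal ETAS model can be sketched as below; the constant background, Gaussian spatial kernel, and all parameter values are illustrative assumptions (in GP-ETAS the background mu is a GP-distributed function, not a constant):

```python
import numpy as np

def etas_intensity(t, x, events, mu, k0=0.1, alpha=1.0, m0=3.0,
                   c=0.01, p=1.2, sigma=1.0):
    """Conditional intensity of a spatio-temporal ETAS model:
      lambda(t, x) = mu(x) + sum_{t_i < t} k0 * exp(alpha (m_i - m0))
                     * (t - t_i + c)^(-p) * N(x; x_i, sigma^2 I),
    with an Omori-law temporal decay and a Gaussian spatial kernel (assumed).
    events: iterable of rows (t_i, x_i1, x_i2, m_i); mu: callable background."""
    lam = mu(x)
    for ti, xi1, xi2, mi in events:
        if ti >= t:
            continue                     # only past events trigger
        temporal = (t - ti + c) ** (-p)
        d2 = (x[0] - xi1) ** 2 + (x[1] - xi2) ** 2
        spatial = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
        lam += k0 * np.exp(alpha * (mi - m0)) * temporal * spatial
    return lam
```

The Bayesian treatment in the paper then places priors on the triggering parameters and a GP prior on mu, and samples the posterior via the augmented Gibbs scheme.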
The Arnoldi process can be applied to inexpensively approximate matrix functions of the form f(A)v and matrix functionals of the form v*(f(A))*g(A)v, where A is a large square non-Hermitian matrix, v is a vector, and the superscript * denotes transposition and complex conjugation. Here f and g are analytic functions that are defined in suitable regions in the complex plane. This paper reviews available approximation methods and describes new ones that provide higher accuracy for essentially the same computational effort by exploiting available, but generally not used, moment information. Numerical experiments show that in some cases the proposed modifications of the Arnoldi decompositions can improve the accuracy of v*(f(A))*g(A)v by about as much as performing an additional step of the Arnoldi process.
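The classical Arnoldi approximation that the paper builds on can be sketched as follows (the paper's refinements using extra moment information are not implemented here; the helper name is an assumption):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fAv(A, v, m, f=expm):
    """Classical Arnoldi approximation f(A) v ~ ||v|| * V_m f(H_m) e_1,
    where V_m is an orthonormal Krylov basis and H_m the upper Hessenberg
    projection of A onto the Krylov subspace K_m(A, v)."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # lucky breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (f(Hm) @ e1)
```

For m much smaller than n, only f of the small m-by-m matrix H_m is evaluated, which is what makes the approach inexpensive; the functional v*(f(A))*g(A)v is approximated analogously from the same decomposition.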
Hidden semi-Markov models generalise hidden Markov models by explicitly modelling the time spent in a given state, the so-called dwell time, using some distribution defined on the natural numbers. While the (shifted) Poisson and negative binomial distribution provide natural choices for such distributions, in practice, parametric distributions can lack the flexibility to adequately model the dwell times. To overcome this problem, a penalised maximum likelihood approach is proposed that allows for a flexible and data-driven estimation of the dwell-time distributions without the need to make any distributional assumption. This approach is suitable for direct modelling purposes or as an exploratory tool to investigate the latent state dynamics. The feasibility and potential of the suggested approach is illustrated in a simulation study and by modelling muskox movements in northeast Greenland using GPS tracking data. The proposed method is implemented in the R-package PHSMM which is available on CRAN.
In this paper, we develop the mathematical tools needed to explore isotopy classes of tilings on hyperbolic surfaces of finite genus, possibly nonorientable, with boundary, and punctured. More specifically, we generalize results on Delaney-Dress combinatorial tiling theory using an extension of mapping class groups to orbifolds, in turn using this to study tilings of covering spaces of orbifolds. Moreover, we study finite subgroups of these mapping class groups. Our results can be used to extend the Delaney-Dress combinatorial encoding of a tiling to yield a finite symbol encoding the complexity of an isotopy class of tilings. The results of this paper provide the basis for a complete and unambiguous enumeration of isotopically distinct tilings of hyperbolic surfaces.