Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be bounded from below by the p-th power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs are chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
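In standard graph-theoretic notation (assumed here, not taken from the thesis: b for the edge weights, c for the potential, w for the Hardy weight, all over a vertex set X), the inequality described above takes the schematic form

```latex
h(\varphi) \;=\;
\underbrace{\tfrac{1}{2}\sum_{x,y \in X} b(x,y)\,\bigl|\varphi(x)-\varphi(y)\bigr|^{p}}_{\text{divergence part}}
\;+\;
\underbrace{\sum_{x \in X} c(x)\,\bigl|\varphi(x)\bigr|^{p}}_{\text{potential part}}
\;\ge\; \sum_{x \in X} w(x)\,\bigl|\varphi(x)\bigr|^{p},
\qquad p > 1,
```

for finitely supported functions φ; an optimal Hardy weight is, roughly, a weight w that cannot be pointwise increased while the inequality still holds.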
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Common solutions to the problems of estimating and forecasting a latent signal, such as the celebrated Kalman filter, rely on a set of strong assumptions, such as linearity of the system dynamics and Gaussianity of the noise terms.
We investigate the fallacy of mis-specifying the noise terms, that is, signal noise and observation noise, with respect to heavy-tailedness: the true dynamics frequently produce observation outliers or abrupt jumps of the signal state due to realizations of heavy tails not considered by the model. We propose a formalisation of observation-noise mis-specification in terms of Huber's ε-contamination, as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal-noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection to combine frequentist reliability constraints and online inference, resulting in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-end sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
Both proposed methods are combined to investigate the double mis-specification problem and are discussed with regard to their capability for reliable and well-tuned uncertainty quantification. Each section provides an introduction to the required terminology and tools, as well as simulation experiments on the popular target-tracking task and the non-linear, chaotic Lorenz-63 system, to showcase the practical performance of the theoretical considerations.
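The regular Kalman filter that the proposed modifications build on can be sketched as a single predict/update step under the linear-Gaussian assumptions named above. This is a minimal textbook illustration with our own variable names, not the modified algorithms of the thesis:

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict/update step of the standard (linear-Gaussian) Kalman filter.

    m, P : prior state mean and covariance
    y    : new observation
    A, Q : linear signal dynamics and signal-noise covariance
    H, R : linear observation model and observation-noise covariance
    """
    # Predict: propagate mean and covariance through the signal dynamics
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the observation via the Kalman gain
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```

Heavy-tailed observation or signal noise violates the Gaussian assumptions baked into Q and R above, which is exactly the mis-specification the thesis addresses.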
Das Eigene und das Fremde
(2023)
This thesis investigates teachers' understanding of others (Fremdverstehen) in mathematics classrooms. Following the sociologist Alfred Schütz, 'Fremdverstehen' denotes the process in which a teacher tries to understand a student's behavior by tracing that behavior back to the lived experience that may underlie it. In his theory of Fremdverstehen, Schütz identifies as an essential feature of this process that a person's understanding of others is always also based on their own experiences. For this reason, the thesis proceeds in two methodological steps: first, the mathematics-related experiences of two teachers are traced; then their Fremdverstehen in concrete situations in mathematics lessons is reconstructed. In the first sub-study (the reconstruction of the teachers' own experiences), data are collected by means of biographical-narrative interviews in which the teachers are prompted to recount their mathematics-related life stories. These interviews are analyzed in the sense of reconstructive case analysis. Overall, the first sub-study yields textual accounts of the reconstructed mathematics-related life stories of the mathematics teachers under study. In the second sub-study (the reconstruction of the teachers' Fremdverstehen), narrative interviews are conducted in which the teachers recount their Fremdverstehen in concrete situations in mathematics lessons. These interviews are analyzed with a three-step procedure that the author developed specifically for the purpose of reconstructing Fremdverstehen.
At the end of this second sub-study, the reconstructed Fremdverstehen of the teachers in various classroom situations is presented, together with the structures that emerge within it. By means of theoretical generalization, statements about five characteristics of teachers' Fremdverstehen in mathematics classrooms in general are finally derived from the results of the second sub-study. With these statements, the thesis offers a first description of how the phenomenon of teachers' Fremdverstehen in mathematics classrooms can take shape.
Zahlen in den Fingern
(2023)
The debate about the use of digital tools in early mathematics education is highly topical. Educational games are designed with the aim of building informal mathematical knowledge and thus enabling a better start at school. However, a digital, game-based presentation alone does not necessarily lead to learning gains. It is therefore all the more important to analyze how the theoretical constructs and the possibilities for interacting with the tools are concretely implemented, and to prepare them appropriately.
In this master's thesis, a mathematical learning game called "Fingu" for preschool use is examined, as an exemplary case, both theoretically and empirically within the framework of Artifact-Centric Activity Theory (ACAT). First, the theoretical background is described comprehensively: number sense, the acquisition of the number concept, part-whole understanding, the perception and determination of quantities, quantity comparison, and the representation of quantities with fingers in the sense of embodied cognition, as well as the use of digital tools and multi-touch devices. The app Fingu is then explained and analyzed theoretically along the ACAT review guide. Finally, the study conducted independently with ten preschool children is presented, and science-based suggestions for improving and further developing the app are derived from it. For Fingu, it can be concluded that many processes, such as (quasi-)subitizing or counting, can be fostered, whereas others, such as part-whole understanding, still require adaptations and/or the support of adults.
Non-local boundary conditions for the spin Dirac operator on spacetimes with timelike boundary
(2023)
Non-local boundary conditions – for example the Atiyah–Patodi–Singer (APS) conditions – for Dirac operators on Riemannian manifolds are rather well-understood, while not much is known for such operators on Lorentzian manifolds. Recently, Bär and Strohmaier [15] and Drago, Große, and Murro [27] introduced APS-like conditions for the spin Dirac operator on Lorentzian manifolds with spacelike and timelike boundary, respectively. While Bär and Strohmaier [15] showed the Fredholmness of the Dirac operator with these boundary conditions, Drago, Große, and Murro [27] proved the well-posedness of the corresponding initial boundary value problem under certain geometric assumptions.
In this thesis, we follow in the footsteps of the latter authors and discuss whether the APS-like conditions for Dirac operators on Lorentzian manifolds with timelike boundary can be replaced by more general conditions such that the associated initial boundary value problems remain well-posed.
We consider boundary conditions that are local in time and non-local in the spatial directions. More precisely, we use the spacetime foliation arising from the Cauchy temporal function and split the Dirac operator along this foliation. This gives rise to a family of elliptic operators each acting on spinors of the spin bundle over the corresponding timeslice. The theory of elliptic operators then ensures that we can find families of non-local boundary conditions with respect to this family of operators. Proceeding, we use such a family of boundary conditions to define a Lorentzian boundary condition on the whole timelike boundary. By analyzing the properties of the Lorentzian boundary conditions, we then find sufficient conditions on the family of non-local boundary conditions that lead to the well-posedness of the corresponding Cauchy problems. The well-posedness itself will then be proven by using classical tools including energy estimates and approximation by solutions of the regularized problems.
Moreover, we use this theory to construct explicit boundary conditions for the Lorentzian Dirac operator. More precisely, we will discuss two examples of boundary conditions – the analogue of the Atiyah–Patodi–Singer and the chirality conditions, respectively, in our setting. For doing this, we will have a closer look at the theory of non-local boundary conditions for elliptic operators and analyze the requirements on the family of non-local boundary conditions for these specific examples.
This thesis bridges two areas of mathematics, algebra on the one hand with the Milnor-Moore theorem (also called Cartier-Quillen-Milnor-Moore theorem) as well as the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand with Shintani zeta functions which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events taking place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products, allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal property of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and the Poincaré-Birkhoff-Witt theorems. It is worth noticing that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often require fewer tools than their non-locality counterparts.
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
Point processes are a common methodology to model sets of events. From earthquakes to social media posts, from the arrival times of neuronal spikes to the timing of crimes, from stock prices to disease spreading: these phenomena can be reduced to occurrences of events concentrated in points. Often, these events happen one after the other, forming a time series.
Models of point processes can be used to deepen our understanding of such events and for classification and prediction. Such models include an underlying random process that generates the events. This work uses Bayesian methodology to infer the underlying generative process from observed data. Our contribution is twofold: we develop new models and new inference methods for these processes.
We propose a model that extends the family of point processes where the occurrence of an event depends on the previous events. This family is known as Hawkes processes. Whereas in most existing models of such processes, past events are assumed to have only an excitatory effect on future events, we focus on the newly developed nonlinear Hawkes process, where past events could have excitatory and inhibitory effects. After defining the model, we present its inference method and apply it to data from different fields, among others, to neuronal activity.
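A standard way to generate data from such a process is Ogata-style thinning. The sketch below is our own minimal illustration with an exponential kernel, not the inference method of the thesis; a negative kernel weight yields the inhibitory effects mentioned above, and the caller-supplied bound `lam_max` is assumed to dominate the intensity on [0, T]:

```python
import numpy as np

def simulate_nonlinear_hawkes(mu, weight, decay, g, T, lam_max, rng=None):
    """Ogata-style thinning for a nonlinear Hawkes process.

    Intensity: lam(t) = g(mu + sum_i weight * exp(-decay * (t - t_i))),
    where a negative `weight` models inhibition and g >= 0 is the link
    function. `lam_max` must upper-bound the intensity on [0, T].
    """
    rng = np.random.default_rng(rng)
    events, t = [], 0.0
    while True:
        # Candidate event from the dominating homogeneous Poisson process
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            break
        drive = mu + sum(weight * np.exp(-decay * (t - ti)) for ti in events)
        # Accept the candidate with probability lam(t) / lam_max
        if rng.uniform() < g(drive) / lam_max:
            events.append(t)
    return np.array(events)
```

With `g` the identity (truncated at zero) and positive `weight` this reduces to the classical, purely excitatory Hawkes process.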
The second model described in the thesis concerns a specific instance of point processes: the decision process underlying human gaze control. This process results in a series of fixated locations in an image. We developed a new model to describe this process, motivated by the well-known exploration-exploitation dilemma. Alongside the model, we present a Bayesian inference algorithm to infer the model parameters.
Remaining in the realm of human scene viewing, we identify the lack of best practices for Bayesian inference in this field. We survey four popular algorithms and compare their performances for parameter inference in two scan path models.
The novel models and inference algorithms presented in this dissertation enrich the understanding of point process data and allow us to uncover meaningful insights.
Übungsbuch zur Stochastik
(2023)
This book provides exercises on the basic concepts and principles of probability and statistics, together with their solutions. Just as one practices scales in music, one works through exercises in mathematics. In this spirit, this workbook is above all intended as a template for independent, self-directed learning and practice.
The beauty and uniqueness of probability theory lie in its ability to model a wide variety of real phenomena. Accordingly, the reader will find here problems with connections to geometry, games of chance, actuarial mathematics, demography, and many other topics.
The mathematics subproject SPIES-M aims at a stronger orientation toward the teaching profession and at linking subject matter and subject didactics in university teacher education. New courses were designed for all major content areas of mathematics and implemented in the study regulations of all mathematics teaching degree programs at the University of Potsdam. For the course design, theory-based design principles were worked out that can be used for the design as well as for the evaluation and further development of the courses following a design-research approach. The implementation of the design principles is illustrated using the fundamental idea of proportionality as an example, showing how students can be enabled to generate subject-didactic knowledge from mathematical subject matter. The development of the students' professional knowledge is examined with a range of instruments in order to draw conclusions about the effectiveness of the newly designed courses. The mixed-methods investigations draw on classroom observations as well as specially designed knowledge tests, group interviews, lesson plans from practical phases, and learning diaries. The students' perspective is captured through surveys on the perceived (professional) relevance of the courses. A further essential element of the accompanying research is collegial supervision by so-called "Spies", who observe the courses along defined criteria and then reflect on them together with the lecturers. The results obtained so far are presented here and discussed with regard to their implications. The design principles developed in the project, as a tool for design and evaluation, and the Spies concept of collegial supervision are proposed for transfer to the quality development of university courses.
In this paper we consider surfaces which are critical points of the Willmore functional subject to an area constraint.
In the case of small area we calculate the corrections to the intrinsic geometry induced by the ambient curvature.
These estimates together with the choice of an adapted geometric center of mass lead to refined position estimates in relation to the scalar curvature of the ambient manifold.
Cell-level systems biology model to study inflammatory bowel diseases and their treatment options
(2023)
To help understand the complex and therapeutically challenging inflammatory bowel diseases (IBDs), we developed a systems biology model of the intestinal immune system that is able to describe main aspects of IBD and different treatment modalities thereof. The model, including key cell types and processes of the mucosal immune response, compiles a large amount of isolated experimental findings from the literature into a larger context and allows for simulations of different inflammation scenarios based on the underlying data and assumptions. In the context of a large and diverse virtual IBD population, we characterized the patients based on their phenotype (in contrast to healthy individuals, they developed persistent inflammation after a trigger event) rather than on a priori assumptions on parameter differences to a healthy individual. This allowed us to reproduce the enormous diversity of predispositions known to lead to IBD. Analyzing different treatment effects, the model provides insight into characteristics of individual drug therapy. We illustrate for anti-TNF-alpha therapy how the model can be used (i) to decide for alternative treatments with best prospects in the case of nonresponse, and (ii) to identify promising combination therapies with other available treatment options.
Amoeboid cell motility takes place in a variety of biomedical processes such as cancer metastasis, embryonic morphogenesis, and wound healing. In contrast to other forms of cell motility, it is mainly driven by substantial cell shape changes. Based on the interplay of explorative membrane protrusions at the front and a slower-acting membrane retraction at the rear, the cell moves in a crawling kind of way. Underlying these protrusions and retractions are multiple physiological processes resulting in changes of the cytoskeleton, a meshwork of different multi-functional proteins. The complexity and versatility of amoeboid cell motility raise the need for novel computational models based on a profound theoretical framework to analyze and simulate the dynamics of the cell shape.
The objective of this thesis is the development of (i) a mathematical framework to describe contour dynamics in time and space, (ii) a computational model to infer expansion and retraction characteristics of individual cell tracks and to produce realistic contour dynamics, (iii) and a complementing Open Science approach to make the above methods fully accessible and easy to use.
In this work, we mainly used single-cell recordings of the model organism Dictyostelium discoideum. Based on stacks of segmented microscopy images, we apply a Bayesian approach to obtain smooth representations of the cell membrane, so-called cell contours. We introduce a one-parameter family of regularized contour flows to track reference points on the contour (virtual markers) in time and space. This way, we define a coordinate system to visualize local geometric and dynamic quantities of individual contour dynamics in so-called kymograph plots. In particular, we introduce the local marker dispersion as a measure to identify membrane protrusions and retractions in a fully automated way.
This mathematical framework is the basis of a novel contour dynamics model, which consists of three biophysiologically motivated components: one stochastic term, accounting for membrane protrusions, and two deterministic terms to control the shape and area of the contour, which account for membrane retractions. Our model provides a fully automated approach to infer protrusion and retraction characteristics from experimental cell tracks while being also capable of simulating realistic and qualitatively different contour dynamics. Furthermore, the model is used to classify two different locomotion types: the amoeboid and a so-called fan-shaped type.
With the complementing Open Science approach, we ensure a high standard regarding the usability of our methods and the reproducibility of our research. In this context, we introduce our software publication named AmoePy, an open-source Python package to segment, analyze, and simulate amoeboid cell motility. Furthermore, we describe measures to improve its usability and extensibility, e.g., by detailed run instructions and an automatically generated source code documentation, and to ensure its functionality and stability, e.g., by automatic software tests, data validation, and a hierarchical package structure.
The mathematical approaches of this work provide substantial improvements regarding the modeling and analysis of amoeboid cell motility. We deem the above methods, due to their generalized nature, to be of greater value for other scientific applications, e.g., varying organisms and experimental setups or the transition from unicellular to multicellular movement. Furthermore, we enable other researchers from different fields, i.e., mathematics, biophysics, and medicine, to apply our mathematical methods. By following Open Science standards, this work is of greater value for the cell migration community and a potential role model for other Open Science contributions.
We present a Reduced Order Model (ROM) which exploits recent developments in Physics Informed Neural Networks (PINNs) for solving inverse problems for the Navier-Stokes equations (NSE). In the proposed approach, the presence of simulated data for the fluid dynamics fields is assumed. A POD-Galerkin ROM is then constructed by applying POD on the snapshots matrices of the fluid fields and performing a Galerkin projection of the NSE (or the modified equations in case of turbulence modeling) onto the POD reduced basis. A POD-Galerkin PINN ROM is then derived by introducing deep neural networks which approximate the reduced outputs with the input being time and/or parameters of the model. The neural networks incorporate the physical equations (the POD-Galerkin reduced equations) into their structure as part of the loss function. Using this approach, the reduced model is able to approximate unknown parameters such as physical constants or the boundary conditions. A demonstration of the applicability of the proposed ROM is illustrated by three cases which are the steady flow around a backward step, the flow around a circular cylinder and the unsteady turbulent flow around a surface mounted cubic obstacle.
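The POD step described above can be sketched via the singular value decomposition of the snapshot matrix. This is a generic minimal illustration with our own naming; the Galerkin projection of the NSE and the PINN components are only indicated in a comment:

```python
import numpy as np

def pod_basis(snapshots, rank):
    """Compute a POD basis from a snapshot matrix (n_dof x n_snapshots).

    Returns the first `rank` left singular vectors (the POD modes) and all
    singular values, whose decay indicates how much energy the truncated
    basis captures.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank], s

# Galerkin projection of a linear operator A onto the POD basis Phi:
#   A_reduced = Phi.T @ A @ Phi
# so the reduced coefficients a(t) evolve in `rank` dimensions; in the
# PINN ROM, the resulting reduced equations enter the network's loss.
```

For a snapshot matrix whose columns span a low-dimensional subspace, projecting onto the leading modes reconstructs the data up to the discarded singular values.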
Introduction:
Hydrocortisone is the standard of care in cortisol replacement therapy for congenital adrenal hyperplasia patients. Challenges in mimicking cortisol circadian rhythm and dosing individualization can be overcome by the support of mathematical modelling. Previously, a non-linear mixed-effects (NLME) model was developed based on clinical hydrocortisone pharmacokinetic (PK) pediatric and adult data. Additionally, a physiologically-based pharmacokinetic (PBPK) model was developed for adults and a pediatric model was obtained using maturation functions for relevant processes. In this work, a middle-out approach was applied. The aim was to investigate whether PBPK-derived maturation functions could provide a better description of hydrocortisone PK inter-individual variability when implemented in the NLME framework, with the goal of providing better individual predictions towards precision dosing at the patient level.
Methods:
Hydrocortisone PK data from 24 adrenal insufficiency pediatric patients and 30 adult healthy volunteers were used for NLME model development, while the PBPK model and maturation functions of clearance and cortisol binding globulin (CBG) were developed based on previous studies published in the literature.
Results:
Clearance (CL) estimates from both approaches were similar for children older than 1 year (CL/F increasing from around 150 L/h to 500 L/h), while CBG concentrations differed across the whole age range (CBG_NLME stable around 0.5 µM vs. a steady increase from 0.35 to 0.8 µM for CBG_PBPK). PBPK-derived maturation functions were subsequently included in the NLME model. After inclusion of the maturation functions, none, some, or all parameters were re-estimated. However, the inclusion of CL and/or CBG maturation functions in the NLME model did not result in improved model performance for the CL maturation function (ΔOFV > -15.36), and the re-estimation of parameters using the CBG maturation function most often led to unstable models or biased individual CL predictions.
Discussion:
Three explanations for the observed discrepancies can be postulated: (i) non-considered maturation of processes such as absorption or the first-pass effect; (ii) the lack of patients between 1 and 12 months of age; (iii) the lack of correction of the PBPK CL maturation functions, derived from urinary concentration ratio data, for renal function relative to adults. These should be investigated in the future to determine how NLME and PBPK methods can work together towards deriving insights into pediatric hydrocortisone PK.
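Maturation functions of the kind discussed here are commonly parameterized as Hill functions of age. The sketch below is a generic illustration with made-up parameters (tm50, hill), not the maturation functions of this study:

```python
def cl_maturation(age_pm_weeks, tm50=47.0, hill=3.0):
    """Hill-type maturation fraction of adult clearance vs postmenstrual age.

    Returns a value in (0, 1): the fraction of the adult clearance reached
    at the given age. Parameters are illustrative placeholders; the
    study-specific PBPK maturation functions are not reproduced here.
    """
    return age_pm_weeks**hill / (tm50**hill + age_pm_weeks**hill)
```

By construction the function equals 0.5 at `tm50` and approaches 1 for large ages, which is the typical shape used when scaling pediatric clearance to adult values.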
This paper deals with the long-term behavior of positive operator semigroups on spaces of bounded functions and of signed measures, which have applications to parabolic equations with unbounded coefficients and to stochastic analysis. The main results are a Tauberian type theorem characterizing the convergence to equilibrium of strongly Feller semigroups and a generalization of a classical convergence theorem of Doob. None of these results requires any kind of time regularity of the semigroup.
The Gutenberg-Richter (GR) and the Omori-Utsu (OU) law describe the earthquakes' energy release and temporal clustering and are thus of great importance for seismic hazard assessment. Motivated by experimental results, which indicate stress-dependent parameters, we consider a combined global data set of 127 main shock-aftershock sequences and perform a systematic study of the relationship between main shock-induced stress changes and associated seismicity patterns. For this purpose, we calculate space-dependent Coulomb stress changes (ΔCFS) and alternative receiver-independent stress metrics in the surrounding of the main shocks. Our results indicate a clear positive correlation between the GR b-value and the induced stress, contrasting expectations from laboratory experiments and suggesting a crucial role of structural heterogeneity and strength variations. Furthermore, we demonstrate that the aftershock productivity increases nonlinearly with stress, while the OU parameters c and p systematically decrease for increasing stress changes. Our partly unexpected findings can have an important impact on future estimations of the aftershock hazard.
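The GR b-value studied above is routinely estimated with the Aki/Utsu maximum-likelihood formula. The sketch below is our own minimal illustration of that standard estimator, not the paper's stress-resolved analysis:

```python
import math

def b_value_aki(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood estimate of the Gutenberg-Richter b-value.

    magnitudes : iterable of event magnitudes
    m_c        : magnitude of completeness (events below it are discarded)
    dm         : magnitude binning width (Utsu correction: m_c - dm/2)
    """
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    # b = log10(e) / (mean magnitude - corrected completeness threshold)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

In stress-resolved studies like the one above, the same estimator is applied separately to events grouped by the local stress metric, so that b can be examined as a function of stress change.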
Deriving mechanism-based pharmacodynamic models by reducing quantitative systems pharmacology models
(2023)
Quantitative systems pharmacology (QSP) models integrate comprehensive qualitative and quantitative knowledge about pharmacologically relevant processes. We previously proposed a first approach to leverage the knowledge in QSP models to derive simpler, mechanism-based pharmacodynamic (PD) models. Their complexity, however, is typically still too large to be used in the population analysis of clinical data. Here, we extend the approach beyond state reduction to also include the simplification of reaction rates, elimination of reactions, and analytic solutions. We additionally ensure that the reduced model maintains a prespecified approximation quality not only for a reference individual but also for a diverse virtual population. We illustrate the extended approach for the warfarin effect on blood coagulation. Using the model-reduction approach, we derive a novel small-scale warfarin/international normalized ratio model and demonstrate its suitability for biomarker identification. Due to the systematic nature of the approach in comparison with empirical model building, the proposed model-reduction algorithm provides an improved rationale to build PD models also from QSP models in other applications.
According to Radzikowski's celebrated results, bisolutions of a wave operator on a globally hyperbolic spacetime are of the Hadamard form iff they are given by a linear combination of distinguished parametrices (i/2)(G̃_aF − G̃_F + G̃_A − G̃_R) in the sense of Duistermaat and Hörmander [Acta Math. 128, 183–269 (1972)] and Radzikowski [Commun. Math. Phys. 179, 529 (1996)]. Inspired by the construction of the corresponding advanced and retarded Green operators G_A, G_R as done by Bär, Ginoux, and Pfäffle [Wave Equations on Lorentzian Manifolds and Quantization (European Mathematical Society (EMS), Zürich, 2007)], we construct the remaining two Green operators G_F, G_aF locally in terms of Hadamard series. Afterward, we provide the global construction of (i/2)(G̃_aF − G̃_F), which relies on new techniques such as a well-posed Cauchy problem for bisolutions and a patching argument using Čech cohomology. This leads to global bisolutions of the Hadamard form, each of which can be chosen to be a Hadamard two-point function, i.e., the smooth part can be adapted such that, additionally, the symmetry and the positivity condition are exactly satisfied.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data allows one to quantify this bias in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM, with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009) and labeled "M_max equals M_obs plus an increment." The incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
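The bias described above can be made concrete with a small synthetic experiment: magnitudes are drawn from a doubly truncated Gutenberg-Richter (exponential) law with a known true M_max, and the maximum observed magnitude systematically falls short of it. This is an illustrative sketch, not the author's code; the b-value, catalogue size, and truncation bounds are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gr_truncated(n, b=1.0, m_min=4.0, m_max=8.0):
    """Draw magnitudes from a doubly truncated Gutenberg-Richter law."""
    beta = b * np.log(10.0)
    u = rng.uniform(size=n)
    # inverse CDF of the exponential law truncated to [m_min, m_max]
    return m_min - np.log1p(-u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta

# the naive estimator M_max,observed over many synthetic catalogues
maxima = np.array([sample_gr_truncated(500).max() for _ in range(1000)])
bias = 8.0 - maxima.mean()
print(f"mean underestimation of the true M_max: {bias:.2f}")
```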
In this paper, we examine the conditioning of discretizations of the Helmholtz problem. Although the discrete Helmholtz problem has been studied from different perspectives, to the best of our knowledge there is no conditioning analysis for it; we aim to fill this gap in the literature. We propose a novel method in 1D to observe the near-zero eigenvalues of a symmetric indefinite matrix. The standard classification of ill-conditioning based on the matrix condition number does not hold for the discrete Helmholtz problem. We relate the ill-conditioning of the discretization of the Helmholtz problem to the condition number of the matrix. We carry out an analytical conditioning analysis in 1D and extend our observations to 2D with numerical experiments. We examine several discretizations. We find different regions in which the condition number of the problem shows different characteristics. We also explain the general behavior of the solutions in these regions.
An explicit Dobrushin uniqueness region for Gibbs point processes with repulsive interactions
(2022)
We present a uniqueness result for Gibbs point processes with interactions that come from a non-negative pair potential; in particular, we provide an explicit uniqueness region in terms of the activity z and the inverse temperature β. The technique relies on applying the classical Dobrushin criterion in the continuous setting. We also present a comparison to the two other uniqueness methods of cluster expansion and disagreement percolation, which can also be applied to this type of interaction.
Symmetric, elegantly entangled structures are curious mathematical constructions that have found their way into the heart of the chemistry lab and the toolbox of constructive geometry. Of particular interest are those structures—knots, links and weavings—which are composed locally of simple twisted strands and are globally symmetric. This paper considers the symmetric tangling of multiple 2-periodic honeycomb networks. We do this using a constructive methodology borrowing elements of graph theory, low-dimensional topology and geometry. The result is a wide-ranging enumeration of symmetric tangled honeycomb networks, providing a foundation for their exploration in both the chemistry lab and the geometer's toolbox.
Conventional embeddings of the edge-graphs of Platonic polyhedra, {f, z}, where f and z denote the number of edges in each face and the edge-valence at each vertex, respectively, are untangled in that they can be placed on a sphere (S²) such that distinct edges do not intersect, analogous to unknotted loops, which allow crossing-free drawings of S¹ on the sphere. The most symmetric (flag-transitive) realizations of those polyhedral graphs are those of the classical Platonic polyhedra, whose symmetries are *2fz in Conway's two-dimensional (2D) orbifold notation (equivalent to the Schönflies symbols I_h, O_h, and T_d). Tangled Platonic {f, z} polyhedra, which cannot lie on the sphere without edge-crossings, are constructed as windings of helices with three, five, seven, ... strands on multigenus surfaces formed by tubifying the edges of conventional Platonic polyhedra. They have (chiral) symmetries 2fz (I, O, and T), and their vertices, edges, and faces are symmetrically identical, realized with two flags. The analysis extends to the "θ_z" polyhedra, {2, z}. The vertices of these symmetric tangled polyhedra overlap with those of the Platonic polyhedra; however, their helicity requires curvilinear (or kinked) edges in all but one case. We show that these 2fz polyhedral tangles are maximally symmetric: more symmetric embeddings are necessarily untangled. On the one hand, their topologies are very constrained: they are either self-entangled graphs (analogous to knots) or mutually catenated entangled compound polyhedra (analogous to links). On the other hand, an endless variety of entanglements can be realized for each topology. Simpler examples resemble patterns observed in synthetic organometallic materials and in clathrin coats in vivo.
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces.
Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free-energy, is an interplay of surface area minimisation and stretching contributions, favouring cells of uniform thickness.
In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search for minimal-energy configurations of repelling charges on a sphere. We find three intertwined Thomson-problem solutions on a single sphere, occurring with a probability that depends on the radius of the substrate.
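The Thomson problem mentioned above can be illustrated with a minimal numerical experiment: projected gradient descent for repelling unit charges on the unit sphere. This is a generic sketch (step size, iteration count, and the n = 4 test case are arbitrary choices), not the simulation code used in the thesis.

```python
import numpy as np

def thomson_energy(x):
    """Coulomb energy of unit charges at positions x (n x 3) on the unit sphere."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return float(np.sum(1.0 / d[iu]))

def relax(n, steps=8000, lr=1e-3, seed=0):
    """Projected gradient descent: move charges along the repulsive Coulomb
    force and re-project onto the sphere (finds a local minimum only)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)            # no self-interaction
        force = np.sum(diff / d[:, :, None] ** 3, axis=1)
        x += lr * force
        x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x

x = relax(4)  # four charges should approach a regular tetrahedron
print(round(thomson_energy(x), 4))
```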
We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, lead to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. its independence of the energy minimisation method and the initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: an order-disorder phase transition is observed with a stable Frank-Kasper phase.
The second part considers bicontinuous partitions of space into two network-like domains, and introduces an open-source tool for the identification of structures in electron microscopy images. We expand a method of matching experimentally accessible projections with computed projections of potential structures, introduced by Deng and Mieczkowski (1998). The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double Diamond surface structure to be dominant in these plant cells. We automate the matching process employing deep-learning methods, which manage to identify structures with excellent accuracy.
The echo chamber model describes the development of groups in heterogeneous social networks. By heterogeneous social network we mean a set of individuals, each of whom represents exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is randomly and uniformly selected from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they continue to move closer together in their opinions, whereas in the case of opinions that are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this paper we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Chapter 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are mutually identifiable networks that cannot be distinguished by the dynamics under analysis, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time, using a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part is entirely based on an analytical approach to second-degree recurrences with linear coefficients; the convergence of the resulting sequence to 0, as well as its speed of convergence, is proved. The second part determines upper bounds on the expected value of the population size and its variance, as well as the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
The geomagnetic main field is vital for life on Earth, as it shields our habitat against the solar wind and cosmic rays. It is generated by the geodynamo in the Earth’s outer core and has rich dynamics on various timescales. Global models of the field are used to study the interaction of the field and incoming charged particles, but also to infer core dynamics and to feed numerical simulations of the geodynamo. Modern satellite missions, such as the SWARM or the CHAMP mission, support high-resolution reconstructions of the global field. From the 19th century on, a global network of magnetic observatories has been established. It has been growing ever since, and global models can be constructed from the data it provides. Geomagnetic field models that extend further back in time rely on indirect observations of the field, i.e. thermoremanent records such as burnt clay or volcanic rocks, and sediment records from lakes and seas. These indirect records come with (partially very large) uncertainties, introduced by the complex measurement methods and the dating procedure.
Focusing on thermoremanent records only, the aim of this thesis is the development of a new modeling strategy for the global geomagnetic field during the Holocene, which takes the uncertainties into account and produces realistic estimates of the reliability of the model. This aim is approached by first considering snapshot models, in order to address the irregular spatial distribution of the records and the non-linear relation of the indirect observations to the field itself. In a Bayesian setting, a modeling algorithm based on Gaussian process regression is developed and applied to binned data. The modeling algorithm is then extended to the temporal domain and expanded to incorporate dating uncertainties. Finally, the algorithm is sequentialized to deal with numerical challenges arising from the size of the Holocene dataset.
The central result of this thesis, including all of the aspects mentioned, is a new global geomagnetic field model. It covers the whole Holocene, back until 12000 BCE, and we call it ArchKalmag14k. When considering the uncertainties that are produced together with the model, it is evident that before 6000 BCE the thermoremanent database is not sufficient to support global models. For times more recent, ArchKalmag14k can be used to analyze features of the field under consideration of posterior uncertainties. The algorithm for generating ArchKalmag14k can be applied to different datasets and is provided to the community as an open source python package.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, the so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, and its index can be calculated by a formula similar to the Riemannian case.
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first-order elliptic differential operator while still yielding regularity of solutions as well as the Fredholm property of the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
This thesis aims at applying the boundary conditions found by Bär and Ballmann in a Lorentzian setting, in order to understand more general types of boundary conditions for the Dirac operator, preserving the Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator when compared to the Riemannian setting. It will be shown that, in contrast to the Riemannian case, passing from a Fredholm boundary condition to its orthogonal complement works out fine in the Lorentzian setting. On the other hand, in order to deduce the Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions on the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the final chapter, Chapter 4, a few examples of boundary conditions are calculated by applying the results of the previous chapters. Restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than they do in the Riemannian setting.
We study boundary value problems for first-order elliptic differential operators on manifolds with compact boundary. The adapted boundary operator need not be selfadjoint and the boundary condition need not be pseudo-local. We show the equivalence of various characterisations of elliptic boundary conditions and demonstrate how the boundary conditions traditionally considered in the literature fit into our framework. The regularity of the solutions up to the boundary is proven. We show that imposing elliptic boundary conditions yields a Fredholm operator if the manifold is compact. We provide examples which are conveniently treated by our methods.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
We show that local deformations, near closed subsets, of solutions to open partial differential relations can be extended to global deformations, provided all but the highest derivatives stay constant along the subset. The applicability of this general result is illustrated by a number of examples, dealing with convex embeddings of hypersurfaces, differential forms, and lapse functions in Lorentzian geometry.
The main application is a general approximation result by sections that have very restrictive local properties on open dense subsets. This shows, for instance, that given any K ∈ ℝ, every manifold of dimension at least 2 carries a complete C^{1,1}-metric which, on a dense open subset, is smooth with constant sectional curvature K. Of course, this is impossible for C²-metrics in general.
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler-Jones-Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Güneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah-Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaumé, Atiyah, Bismut and Witten.
We propose a global geomagnetic field model for the last 14 thousand years, based on thermoremanent records. We call the model ArchKalmag14k. ArchKalmag14k is constructed by modifying recently proposed algorithms, based on space-time correlations. Due to the amount of data and complexity of the model, the full Bayesian posterior is numerically intractable. To tackle this, we sequentialize the inversion by implementing a Kalman-filter with a fixed time step. Every step consists of a prediction, based on a degree dependent temporal covariance, and a correction via Gaussian process regression. Dating errors are treated via a noisy input formulation. Cross correlations are reintroduced by a smoothing algorithm and model parameters are inferred from the data. Due to the specific statistical nature of the proposed algorithms, the model comes with space and time-dependent uncertainty estimates. The new model ArchKalmag14k shows less variation in the large-scale degrees than comparable models. Local predictions represent the underlying data and agree with comparable models, if the location is sampled well. Uncertainties are bigger for earlier times and in regions of sparse data coverage. We also use ArchKalmag14k to analyze the appearance and evolution of the South Atlantic anomaly together with reverse flux patches at the core-mantle boundary, considering the model uncertainties. While we find good agreement with earlier models for recent times, our model suggests a different evolution of intensity minima prior to 1650 CE. In general, our results suggest that prior to 6000 BCE the data is not sufficient to support global models.
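The predict/correct structure described above can be illustrated, in a drastically simplified scalar setting, by a textbook Kalman filter cycle; the damping factor phi stands in for the degree-dependent temporal covariance, and the correction is a one-point Gaussian process regression. All parameter values here are illustrative assumptions, not those of ArchKalmag14k.

```python
import numpy as np

def kalman_step(m, P, y, R, phi, Q):
    """One predict/correct cycle of a scalar Kalman filter."""
    # prediction from the temporal covariance model
    m_pred, P_pred = phi * m, phi**2 * P + Q
    # correction: Gaussian process regression at a single observation
    K = P_pred / (P_pred + R)          # Kalman gain
    m_new = m_pred + K * (y - m_pred)
    P_new = (1.0 - K) * P_pred
    return m_new, P_new

# track a slowly decaying signal through noisy observations
rng = np.random.default_rng(1)
truth, m, P = 1.0, 0.0, 1.0
for _ in range(50):
    truth = 0.95 * truth + 0.05 * rng.normal()
    y = truth + 0.1 * rng.normal()
    m, P = kalman_step(m, P, y, R=0.01, phi=0.95, Q=0.0025)
print(m, P)  # posterior mean tracks the truth; P quantifies the uncertainty
```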
We introduce the class of "smooth rough paths" and study their main properties. Working in a smooth setting allows us to discard sewing arguments and focus on algebraic and geometric aspects. Specifically, a Maurer-Cartan perspective is the key to a purely algebraic form of Lyons' extension theorem, the renormalization of rough paths following up on [Bruned et al.: A rough path perspective on renormalization, J. Funct. Anal. 277(11), 2019], as well as a related notion of "sum of rough paths". We first develop our ideas in a geometric rough path setting, as this best resonates with recent works on signature varieties, as well as with the renormalization of geometric rough paths. We then explore extensions to the quasi-geometric and the more general Hopf algebraic setting.
Randomised one-step time integration methods for deterministic operator differential equations
(2022)
Uncertainty quantification plays an important role in problems that involve inferring a parameter of an initial value problem from observations of the solution. Conrad et al. (Stat Comput 27(4):1065-1082, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to the unknown time discretisation error. We consider this strategy for systems that are described by deterministic, possibly time-dependent operator differential equations defined on a Banach space or a Gelfand triple. Our main results are strong error bounds on the random trajectories measured in Orlicz norms, proven under a weaker assumption on the local truncation error of the underlying deterministic time integration method. Our analysis establishes the theoretical validity of randomised time integration for differential equations in infinite-dimensional settings.
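The randomisation strategy of Conrad et al. can be sketched for the explicit Euler method on a scalar ODE: each deterministic step is perturbed by Gaussian noise whose scale mimics the order of the local truncation error. The noise scaling h^{3/2} and all parameter values here are illustrative assumptions.

```python
import numpy as np

def randomised_euler(f, x0, h, n_steps, sigma, rng):
    """Explicit Euler with additive Gaussian perturbations whose scale,
    sigma * h**1.5, mimics the order of the local truncation error."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + h * f(x[k]) + sigma * h**1.5 * rng.normal()
    return x

# ensemble of random trajectories for x' = -x, x(0) = 1
rng = np.random.default_rng(0)
h, T = 0.01, 1.0
ens = np.array([randomised_euler(lambda s: -s, 1.0, h, int(T / h), 1.0, rng)
                for _ in range(200)])
mean_end, std_end = ens[:, -1].mean(), ens[:, -1].std()
print(mean_end, std_end)  # the ensemble spread quantifies discretisation uncertainty
```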
Variational Bayesian inference for nonlinear Hawkes processes with Gaussian process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian Process. Whereas previous work either relies on a less flexible parameterization of the model, or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work of Bayesian inference for Hawkes processes, and derive an inference algorithm by performing inference on an aggregated sum of Gaussian Processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model we apply our methodology on data from different domains and compare it to previously reported results.
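For orientation, the classical parametric Hawkes baseline that the proposed model generalises can be simulated with Ogata's thinning algorithm; the exponential excitation kernel used here is the standard textbook choice and not the Gaussian-process self-effects of the paper.

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity mu + sum_i alpha * exp(-beta * (t - t_i))."""
    ev = np.asarray(events)
    return mu + alpha * np.exp(-beta * (t - ev)).sum()

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata's thinning algorithm; between events the exponential-kernel
    intensity is decreasing, so its current value is a valid upper bound."""
    t, events = 0.0, []
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return events
        if rng.uniform() < intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)

rng = np.random.default_rng(0)
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, T=200.0, rng=rng)
print(len(events))  # stationary rate is mu / (1 - alpha/beta) = 2.5 per unit time
```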
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing stochastic parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
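The idea of drawing model errors conditioned on an informative covariate can be sketched with a simple histogram-based sampler. A real implementation would use the constrained minimisation described above, so this quantile-binning scheme and the synthetic training data are purely illustrative.

```python
import numpy as np

def conditional_error_sampler(states, errors, n_bins=10):
    """Empirical sampler of additive model errors conditioned on a scalar
    covariate (here: the coarse-scale state), via quantile binning."""
    edges = np.quantile(states, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, states, side="right") - 1, 0, n_bins - 1)
    pools = [errors[idx == b] for b in range(n_bins)]
    def sample(x, rng):
        b = int(np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1))
        pool = pools[b] if len(pools[b]) else errors   # fall back to all errors
        return rng.choice(pool)
    return sample

# synthetic training period with state-dependent error statistics
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=5000)
errors = 0.1 * states + 0.05 * rng.normal(size=5000)
sample = conditional_error_sampler(states, errors)
draws = np.array([sample(0.9, rng) for _ in range(2000)])
print(draws.mean())  # close to the conditional mean of errors in the top bin
```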
We present a technique for the enumeration of all isotopically distinct ways of tiling a hyperbolic surface of finite genus, possibly nonorientable and with punctures and boundary. This generalizes the enumeration, using Delaney-Dress combinatorial tiling theory, of combinatorial classes of tilings to isotopy classes of tilings. To accomplish this, we derive an action of the mapping class group of the orbifold associated to the symmetry group of a tiling on the set of tilings. We explicitly give descriptions and presentations of semipure mapping class groups and of tilings as decorations on orbifolds. We apply this enumerative result to generate an array of isotopically distinct tilings of the hyperbolic plane with symmetries generated by rotations that are commensurate with the three-dimensional symmetries of the primitive, diamond, and gyroid triply periodic minimal surfaces, which have relevance to a variety of physical systems.
The motivation for this work was the question of reliability and robustness of seismic tomography. The problem is that many earth models exist which can describe the underlying ground motion records equally well. Most algorithms for reconstructing earth models provide a solution, but rarely quantify their variability. If there is no way to verify the imaged structures, an interpretation is hardly reliable. The initial idea was to explore the space of equivalent earth models using Bayesian inference. However, it quickly became apparent that the rigorous quantification of tomographic uncertainties could not be accomplished within the scope of a dissertation.
In order to maintain the fundamental concept of statistical inference, less complex problems from the geosciences are treated instead. This dissertation aims to anchor Bayesian inference more deeply in the geosciences and to transfer knowledge from applied mathematics. The underlying idea is to use well-known methods and techniques from statistics to quantify the uncertainties of inverse problems in the geosciences. This work is divided into three parts:
Part I introduces the necessary mathematics and should be understood as a kind of toolbox. With a physical application in mind, this section provides a compact summary of all methods and techniques used. It begins with an introduction to Bayesian inference. Then, as a special case, the focus is on regression with Gaussian processes under linear transformations. The derivation of covariance functions and the approximation of non-linearities are discussed in more detail.
Part II presents two proof-of-concept studies in the field of seismology. The aim is to present the conceptual application of the introduced methods and techniques at moderate complexity. The example on traveltime tomography applies the approximation of non-linear relationships. The derivation of a covariance function using the wave equation is shown in the example of a damped vibrating string. With these two synthetic applications, a consistent concept for the quantification of modeling uncertainties has been developed.
Part III presents the reconstruction of the Earth's archeomagnetic field. This application uses the whole toolbox presented in Part I and is correspondingly complex. The modeling of the past 1000 years is based on real data and reliably quantifies the spatial modeling uncertainties. The statistical model presented is widely used and is under active development.
The three applications mentioned are intentionally kept flexible to allow transferability to similar problems. The entire work focuses on the non-uniqueness of inverse problems in the geosciences. It is intended to be of relevance to those interested in the concepts of Bayesian inference.
In this work, we present Raman lidar data (from a Nd:YAG laser operating at 355 nm, 532 nm and 1064 nm) from the international research village Ny-Ålesund for the period of January to April 2020, during the Arctic haze season of the MOSAiC winter. We present values of the aerosol backscatter, the lidar ratio and the backscatter Ångström exponent; the latter depends on wavelength. The aerosol polarization was generally below 2%, indicating mostly spherical particles. We observed that events with high backscatter and high lidar ratio did not coincide. In fact, the highest lidar ratios (LR > 75 sr at 532 nm) were found as early as January and may have been caused by hygroscopic growth rather than by advection of more continental aerosol. Further, we performed an inversion of the lidar data to retrieve a refractive index and a size distribution of the aerosol. Our results suggest that in the free troposphere (above ≈2500 m) the aerosol size distribution is quite constant in time, with a dominance of small particles with a modal radius well below 100 nm. On the contrary, below ≈2000 m in altitude we frequently found gradients in aerosol backscatter and even in size distribution, sometimes in accordance with gradients of wind speed, humidity or elevated temperature inversions, as if the aerosol was strongly modified by vertical displacement in what we call the "mechanical boundary layer". Finally, we present an indication that the additional meteorological soundings during the MOSAiC campaign did not necessarily improve the fidelity of air back-trajectories.
The Levenberg–Marquardt regularization for the backward heat equation with fractional derivative
(2022)
The backward heat problem with a time-fractional derivative in Caputo's sense is studied. The inverse problem is severely ill-posed when the fractional order is close to unity. A Levenberg-Marquardt method with a new a posteriori stopping rule is investigated. We show that the optimal convergence order is obtained for the proposed method under a Hölder-type source condition. Numerical examples in one and two dimensions are provided.
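The interplay of Levenberg-Marquardt iteration and an a posteriori stopping rule can be illustrated on a generic linear ill-posed problem. This is a minimal sketch only: it uses the classical Morozov discrepancy principle as the stopping rule (not the new rule of the paper) and an ordinary matrix operator rather than the fractional backward heat operator; `tau`, `alpha0` and `q` are illustrative choices.

```python
import numpy as np

def levenberg_marquardt(A, y_delta, delta, tau=1.5, alpha0=1.0, q=0.5,
                        max_iter=50):
    """Iterate x_{k+1} = x_k + (A^T A + alpha_k I)^{-1} A^T (y - A x_k),
    shrinking alpha_k geometrically, until ||A x_k - y|| <= tau * delta."""
    n = A.shape[1]
    x = np.zeros(n)
    alpha = alpha0
    for _ in range(max_iter):
        r = y_delta - A @ x
        if np.linalg.norm(r) <= tau * delta:   # discrepancy-type stopping
            break
        h = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ r)
        x = x + h
        alpha *= q                             # relax regularization
    return x

# Mildly ill-conditioned toy problem (polynomial fitting) with noisy data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 1.0, -0.5])
delta = 1e-3                                   # noise level ||y - y_delta||
y_delta = A @ x_true + delta * rng.standard_normal(40) / np.sqrt(40)
x_rec = levenberg_marquardt(A, y_delta, delta)
```

Stopping by the discrepancy principle, rather than iterating to convergence, is what provides the regularizing effect for noisy data.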
As the loop space of a Riemannian manifold is infinite-dimensional, it is a non-trivial problem to make sense of the "top degree component" of a differential form on it.
In this paper, we show that a formula from finite dimensions generalizes to assign a sensible "top degree component" to certain composite forms, obtained by wedging with the exponential (in the exterior algebra) of the canonical presymplectic 2-form on the loop space.
This construction is a crucial ingredient for the definition of the supersymmetric path integral on the loop space.
In this paper, we define a variant of Roe algebras for spaces with cylindrical ends and use this to study questions regarding existence and classification of metrics of positive scalar curvature on such manifolds which are collared on the cylindrical end.
We discuss how our constructions are related to relative higher index theory as developed by Chang, Weinberger, and Yu and use this relationship to define higher rho-invariants for positive scalar curvature metrics on manifolds with boundary.
This paves the way for the classification of these metrics.
Finally, we use the machinery developed here to give a concise proof of a result of Schick and the author, which relates the relative higher index with indices defined in the presence of positive scalar curvature on the boundary.
Hidden semi-Markov models generalise hidden Markov models by explicitly modelling the time spent in a given state, the so-called dwell time, using some distribution defined on the natural numbers. While the (shifted) Poisson and negative binomial distribution provide natural choices for such distributions, in practice, parametric distributions can lack the flexibility to adequately model the dwell times. To overcome this problem, a penalised maximum likelihood approach is proposed that allows for a flexible and data-driven estimation of the dwell-time distributions without the need to make any distributional assumption. This approach is suitable for direct modelling purposes or as an exploratory tool to investigate the latent state dynamics. The feasibility and potential of the suggested approach are illustrated in a simulation study and by modelling muskox movements in northeast Greenland using GPS tracking data. The proposed method is implemented in the R-package PHSMM which is available on CRAN.
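The core idea of the penalised approach can be sketched in isolation: estimate a dwell-time probability mass function nonparametrically by maximising a log-likelihood minus a smoothness penalty on second differences. This is a hedged illustration in Python rather than the full HSMM machinery of the R package PHSMM; the grid size `M` and penalty strength `lam` are illustrative (in practice `lam` would be chosen data-adaptively), and `fit_dwell_pmf` is a hypothetical helper name.

```python
import numpy as np
from scipy.optimize import minimize

def fit_dwell_pmf(dwell_times, M=20, lam=5.0):
    """Penalised ML estimate of a dwell-time pmf on {1, ..., M}."""
    counts = np.bincount(dwell_times, minlength=M + 1)[1:M + 1]

    def neg_pen_loglik(theta):
        # softmax parameterisation keeps the pmf positive and normalised
        p = np.exp(theta - theta.max())
        p /= p.sum()
        loglik = counts @ np.log(p + 1e-12)
        # penalty on squared second differences enforces smoothness
        penalty = lam * np.sum(np.diff(theta, n=2) ** 2)
        return -loglik + penalty

    res = minimize(neg_pen_loglik, np.zeros(M), method="L-BFGS-B")
    p = np.exp(res.x - res.x.max())
    return p / p.sum()

# Simulated dwell times from a shifted Poisson, as in the parametric case
# the nonparametric estimate should be able to recover.
rng = np.random.default_rng(1)
sample = rng.poisson(4, size=500) + 1
pmf = fit_dwell_pmf(sample)
```

Without the penalty this reduces to the raw relative frequencies; the penalty trades a little bias for a smooth, plausible dwell-time distribution even when some durations are rarely observed.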
Ground motion with strong-velocity pulses can cause significant damage to buildings and structures at certain periods; hence, knowing the period and velocity amplitude of such pulses is critical for earthquake structural engineering.
However, the physical factors relating the scaling of pulse periods with magnitude are poorly understood.
In this study, we investigate moderate but damaging earthquakes (Mw 6-7) and characterize ground-motion pulses using the method of Shahi and Baker (2014) while considering the potential static-offset effects.
We confirm that the within-event variability of the pulses is large. The identified pulses in this study are mostly from strike-slip-like earthquakes. We further perform simulations using the frequency-wavenumber algorithm to investigate the causes of the variability of the pulse periods within and between events for moderate strike-slip earthquakes.
We test the effect of fault dips, and the impact of the asperity locations and sizes. The simulations reveal that the asperity properties have a high impact on the pulse periods and amplitudes at nearby stations.
Our results emphasize the importance of asperity characteristics, in addition to earthquake magnitudes for the occurrence and properties of pulses produced by the forward directivity effect.
We finally quantify and discuss within- and between-event variabilities of pulse properties at short distances.
We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model.
This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs, or more.
Moreover, the forward model is often given as a black box or is impractical to differentiate.
Therefore derivative-free algorithms are highly desirable. We propose a framework built on Kalman methodology to efficiently perform Bayesian inference in such inverse problems.
The method approximates the filtering distribution of a novel mean-field dynamical system into which the inverse problem is embedded as an observation operator.
Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it.
This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior.
Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model; and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach.
The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear/nonlinear examples and two large-scale applications: learning of permeability parameters in subsurface flow; and learning subgrid-scale parameters in a global climate model.
Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and are compared numerically.
The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems.
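The derivative-free character of such Kalman-based inversion can be illustrated with a minimal ensemble Kalman inversion (EKI) sketch: particles are updated using ensemble cross-covariances only, so the forward model `G` is treated as a black box. This is a generic stochastic (perturbed-observation) EKI update under simplifying assumptions, not the specific mean-field method of the paper; all names here are illustrative.

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng):
    """One stochastic EKI update of the ensemble U (n_particles x dim)."""
    GU = np.array([G(u) for u in U])                     # black-box evals
    u_bar, g_bar = U.mean(axis=0), GU.mean(axis=0)
    # ensemble cross-covariance and output covariance replace derivatives
    Cug = (U - u_bar).T @ (GU - g_bar) / (len(U) - 1)
    Cgg = (GU - g_bar).T @ (GU - g_bar) / (len(U) - 1)
    K = Cug @ np.linalg.inv(Cgg + Gamma)                 # Kalman gain
    # perturbed observations, one replicate per particle
    Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, len(U))
    return U + (Y - GU) @ K.T

# Linear toy inverse problem: recover u_true from noisy y = A u + noise.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
u_true = np.array([1.0, -1.0, 0.5])
Gamma = 0.01 * np.eye(10)                                # noise covariance
y = A @ u_true + rng.multivariate_normal(np.zeros(10), Gamma)

U = rng.standard_normal((200, 3))                        # prior ensemble
G = lambda u: A @ u                                      # "black-box" model
for _ in range(20):
    U = eki_step(U, G, y, Gamma, rng)
```

Because the update only ever calls `G(u)`, the same loop applies unchanged when the forward model is an expensive, non-differentiable simulator.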
We consider the initial value problem for the Navier-Stokes equations over R^3 x [0, T] with time T > 0 in the spatially periodic setting.
We prove that it induces open injective mappings A^s: B^s_1 -> B^{s-1}_2, where B^s_1, B^{s-1}_2 are elements of scales of specially constructed function spaces of Bochner-Sobolev type parametrized by the smoothness index s in N.
Finally, we prove that the map A^s is surjective if and only if the inverse image (A^s)^{-1}(K) of any precompact set K from the range of A^s is bounded in the Bochner space L^s([0, T], L^r(T^3)) with the Ladyzhenskaya-Prodi-Serrin numbers s, r.