Refine
Year of publication
Document Type
- Article (1059)
- Monograph/Edited Volume (427)
- Preprint (378)
- Doctoral Thesis (150)
- Other (46)
- Postprint (31)
- Review (16)
- Conference Proceeding (8)
- Master's Thesis (7)
- Part of a Book (3)
Language
- English (1852)
- German (265)
- French (7)
- Italian (3)
- Multiple languages (1)
Keywords
- random point processes (19)
- statistical mechanics (19)
- stochastic analysis (19)
- index (14)
- Fredholm property (12)
- boundary value problems (12)
- cluster expansion (10)
- data assimilation (10)
- regularization (10)
- elliptic operators (9)
Institute
- Institut für Mathematik (2128)
Das Eigene und das Fremde
(2023)
This thesis investigates teachers' understanding of others (Fremdverstehen) in mathematics teaching. Following the sociologist Alfred Schütz, 'Fremdverstehen' here denotes the process in which a teacher tries to understand the behavior of a student by tracing this behavior back to an experience that may have underlain it. As an essential feature of this process, Schütz's theory of Fremdverstehen emphasizes that a person's understanding of others is always also based on their own experiences. For this reason, the thesis proceeds in two methodological steps: first, the mathematics-related experiences of two teachers are traced, before their Fremdverstehen in concrete situations in mathematics lessons is reconstructed. In the first substudy (the reconstruction of the teachers' own experiences), data are collected by means of biographical-narrative interviews in which the teachers are encouraged to tell their mathematics-related life stories. These interviews are analyzed following the method of reconstructive case analysis. The first substudy results in textual accounts of the reconstructed mathematics-related life stories of the investigated mathematics teachers. In the second substudy (the reconstruction of the teachers' Fremdverstehen), narrative interviews are conducted in which the teachers recount their Fremdverstehen in concrete situations in mathematics lessons. These interviews are analyzed using a three-step procedure that the author developed specifically for the purpose of reconstructing Fremdverstehen.
At the end of this second substudy, both the reconstructed Fremdverstehen of the teachers in various classroom situations and the structures that emerge in their Fremdverstehen are presented. By means of theoretical generalization, statements about five characteristics of teachers' Fremdverstehen in mathematics teaching in general are finally derived from the results of the second substudy. With these statements, the thesis offers a first description of how the phenomenon of teachers' Fremdverstehen in mathematics teaching can take shape.
Zahlen in den Fingern
(2023)
The debate about the use of digital tools in early mathematics education is highly topical. Learning games are designed with the aim of building informal mathematical knowledge and thus enabling a better start at school. However, a digital, playful presentation alone does not necessarily lead to learning success. It is therefore all the more important to analyze how the theoretical constructs and the possibilities for interacting with these tools are concretely implemented, and to prepare them appropriately.
In this master's thesis, a mathematical learning game called "Fingu" for use in preschool is examined, both theoretically and empirically, within the framework of Artifact-Centric Activity Theory (ACAT). To this end, the theoretical background on number sense, the acquisition of the number concept, part-whole understanding, the perception and determination of quantities, quantity comparisons, and the representation of quantities with fingers according to embodied cognition, as well as the use of digital tools and multi-touch devices, is first described comprehensively. The app Fingu is then explained and analyzed theoretically along the ACAT review guide. Finally, a study conducted by the author with ten preschool children is presented, and on this basis scientifically grounded suggestions for improving and further developing the app are offered. In conclusion, Fingu can support many processes, such as (quasi-)simultaneous perception of quantities and counting, while others, such as part-whole understanding, still require adaptations and/or the support of adults.
Non-local boundary conditions for the spin Dirac operator on spacetimes with timelike boundary
(2023)
Non-local boundary conditions – for example the Atiyah–Patodi–Singer (APS) conditions – for Dirac operators on Riemannian manifolds are rather well-understood, while not much is known for such operators on Lorentzian manifolds. Recently, Bär and Strohmaier [15] and Drago, Große, and Murro [27] introduced APS-like conditions for the spin Dirac operator on Lorentzian manifolds with spacelike and timelike boundary, respectively. While Bär and Strohmaier [15] showed the Fredholmness of the Dirac operator with these boundary conditions, Drago, Große, and Murro [27] proved the well-posedness of the corresponding initial boundary value problem under certain geometric assumptions.
In this thesis, we follow in the footsteps of the latter authors and discuss whether the APS-like conditions for Dirac operators on Lorentzian manifolds with timelike boundary can be replaced by more general conditions such that the associated initial boundary value problems are still well-posed.
We consider boundary conditions that are local in time and non-local in the spatial directions. More precisely, we use the spacetime foliation arising from the Cauchy temporal function and split the Dirac operator along this foliation. This gives rise to a family of elliptic operators each acting on spinors of the spin bundle over the corresponding timeslice. The theory of elliptic operators then ensures that we can find families of non-local boundary conditions with respect to this family of operators. Proceeding, we use such a family of boundary conditions to define a Lorentzian boundary condition on the whole timelike boundary. By analyzing the properties of the Lorentzian boundary conditions, we then find sufficient conditions on the family of non-local boundary conditions that lead to the well-posedness of the corresponding Cauchy problems. The well-posedness itself will then be proven by using classical tools including energy estimates and approximation by solutions of the regularized problems.
Moreover, we use this theory to construct explicit boundary conditions for the Lorentzian Dirac operator. More precisely, we discuss two examples of boundary conditions – the analogues of the Atiyah–Patodi–Singer and the chirality conditions, respectively, in our setting. To do this, we take a closer look at the theory of non-local boundary conditions for elliptic operators and analyze the requirements on the family of non-local boundary conditions for these specific examples.
This thesis bridges two areas of mathematics: algebra on the one hand, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) and the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events taking place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". A vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal property of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all these structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noting that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often use fewer tools than their non-locality counterparts.
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with non-negative real entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated with the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
Point processes are a common methodology for modeling sets of events. From earthquakes to social media posts, from the arrival times of neuronal spikes to the timing of crimes, from stock prices to disease spreading -- these phenomena can be reduced to the occurrences of events concentrated in points. Often, these events happen one after the other, defining a time series.
Models of point processes can be used to deepen our understanding of such events and for classification and prediction. Such models include an underlying random process that generates the events. This work uses Bayesian methodology to infer the underlying generative process from observed data. Our contribution is twofold -- we develop new models and new inference methods for these processes.
We propose a model that extends the family of point processes where the occurrence of an event depends on the previous events. This family is known as Hawkes processes. Whereas in most existing models of such processes, past events are assumed to have only an excitatory effect on future events, we focus on the newly developed nonlinear Hawkes process, where past events could have excitatory and inhibitory effects. After defining the model, we present its inference method and apply it to data from different fields, among others, to neuronal activity.
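The excitation/inhibition mechanism described above can be illustrated with a standard simulation technique. The sketch below uses Ogata's thinning algorithm with an exponential kernel and a simple rectifier nonlinearity; the kernel, the rectifier, and all parameter values are illustrative choices, not the thesis' exact model.

```python
import numpy as np

def simulate_nonlinear_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a nonlinear Hawkes process on [0, T] by Ogata's thinning.

    Conditional intensity: lambda(t) = max(mu + sum_i alpha*exp(-beta*(t - t_i)), 0).
    alpha > 0 models excitation, alpha < 0 inhibition; the rectifier max(., 0)
    is one common choice that keeps the intensity non-negative.
    """
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0
    while t < T:
        # Valid upper bound for all future times: keep only excitatory terms,
        # which can only decay; inhibitory terms can only relax upward to zero.
        lam_bar = mu + sum(max(alpha, 0.0) * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = max(mu + sum(alpha * np.exp(-beta * (t - s)) for s in events), 0.0)
        if rng.uniform() < lam_t / lam_bar:
            events.append(t)  # accept with probability lambda(t) / lam_bar
    return events
```

With the same baseline rate, an excitatory kernel (alpha > 0) produces noticeably more events than an inhibitory one (alpha < 0), which is the qualitative effect the nonlinear model captures.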
The second model described in the thesis concerns a specific instance of point processes -- the decision process underlying human gaze control. This process results in a series of fixated locations in an image. We developed a new model to describe this process, motivated by the well-known Exploration-Exploitation dilemma. Alongside the model, we present a Bayesian inference algorithm to infer the model parameters.
Remaining in the realm of human scene viewing, we identify the lack of best practices for Bayesian inference in this field. We survey four popular algorithms and compare their performances for parameter inference in two scan path models.
The novel models and inference algorithms presented in this dissertation enrich the understanding of point process data and allow us to uncover meaningful insights.
Amoeboid cell motility takes place in a variety of biomedical processes such as cancer metastasis, embryonic morphogenesis, and wound healing. In contrast to other forms of cell motility, it is mainly driven by substantial cell shape changes. Based on the interplay of explorative membrane protrusions at the front and a slower-acting membrane retraction at the rear, the cell moves in a crawling kind of way. Underlying these protrusions and retractions are multiple physiological processes resulting in changes of the cytoskeleton, a meshwork of different multi-functional proteins. The complexity and versatility of amoeboid cell motility raise the need for novel computational models based on a profound theoretical framework to analyze and simulate the dynamics of the cell shape.
The objective of this thesis is the development of (i) a mathematical framework to describe contour dynamics in time and space, (ii) a computational model to infer expansion and retraction characteristics of individual cell tracks and to produce realistic contour dynamics, (iii) and a complementing Open Science approach to make the above methods fully accessible and easy to use.
In this work, we mainly used single-cell recordings of the model organism Dictyostelium discoideum. Based on stacks of segmented microscopy images, we apply a Bayesian approach to obtain smooth representations of the cell membrane, so-called cell contours. We introduce a one-parameter family of regularized contour flows to track reference points on the contour (virtual markers) in time and space. This way, we define a coordinate system to visualize local geometric and dynamic quantities of individual contour dynamics in so-called kymograph plots. In particular, we introduce the local marker dispersion as a measure to identify membrane protrusions and retractions in a fully automated way.
This mathematical framework is the basis of a novel contour dynamics model, which consists of three biophysiologically motivated components: one stochastic term, accounting for membrane protrusions, and two deterministic terms to control the shape and area of the contour, which account for membrane retractions. Our model provides a fully automated approach to infer protrusion and retraction characteristics from experimental cell tracks while being also capable of simulating realistic and qualitatively different contour dynamics. Furthermore, the model is used to classify two different locomotion types: the amoeboid and a so-called fan-shaped type.
With the complementing Open Science approach, we ensure a high standard regarding the usability of our methods and the reproducibility of our research. In this context, we introduce our software publication named AmoePy, an open-source Python package to segment, analyze, and simulate amoeboid cell motility. Furthermore, we describe measures to improve its usability and extensibility, e.g., by detailed run instructions and an automatically generated source code documentation, and to ensure its functionality and stability, e.g., by automatic software tests, data validation, and a hierarchical package structure.
The mathematical approaches of this work provide substantial improvements regarding the modeling and analysis of amoeboid cell motility. We deem the above methods, due to their generalized nature, to be of greater value for other scientific applications, e.g., varying organisms and experimental setups or the transition from unicellular to multicellular movement. Furthermore, we enable other researchers from different fields, i.e., mathematics, biophysics, and medicine, to apply our mathematical methods. By following Open Science standards, this work is of greater value for the cell migration community and a potential role model for other Open Science contributions.
The mathematics subproject SPIES-M aims at a stronger professional orientation and at linking subject matter and subject didactics in university teacher education. New courses were designed for all major content areas of mathematics and implemented in the study regulations of all mathematics teaching degree programs at the University of Potsdam. For the course design, theory-based design principles were derived that can be used for the design as well as for the evaluation and further development of the courses following the design-research approach. The implementation of the design principles is illustrated by the example of the fundamental idea of proportionality, showing how students can be enabled to generate pedagogical content knowledge from mathematical subject matter. The development of the students' professional knowledge is examined with various instruments in order to draw conclusions about the effectiveness of the newly designed courses. In addition to observations in courses, the mixed-methods investigations use specifically designed knowledge tests, group interviews, lesson plans from practical phases, and learning diaries. The students' perspective is captured through surveys on the perceived (professional) relevance of the courses. A further essential element of the accompanying research is collegial supervision by so-called "Spies", who observe the courses according to defined criteria and then reflect on them together with the lecturers. The results obtained so far are presented here and discussed with respect to their implications. The design principles developed in the project, as a tool for design and evaluation, and the Spies concept of collegial supervision are proposed for transfer to the quality development of university courses.
Übungsbuch zur Stochastik
(2023)
This book provides exercises on the basic concepts and principles of stochastics, together with their solutions. Just as one practices scales in music, one works through exercises in mathematics. In this spirit, this exercise book is intended above all as a template for independent, self-directed learning and practice.
The beauty and uniqueness of probability theory lies in the fact that it can model a multitude of real phenomena. Accordingly, this book contains problems with connections to geometry, games of chance, actuarial mathematics, demography, and many other topics.
According to Radzikowski’s celebrated results, bisolutions of a wave operator on a globally hyperbolic spacetime are of the Hadamard form iff they are given by a linear combination of distinguished parametrices (i/2)(G̃_aF − G̃_F + G̃_A − G̃_R) in the sense of Duistermaat and Hörmander [Acta Math. 128, 183–269 (1972)] and Radzikowski [Commun. Math. Phys. 179, 529 (1996)]. Inspired by the construction of the corresponding advanced and retarded Green operators G_A, G_R by Bär, Ginoux, and Pfäffle [Wave Equations on Lorentzian Manifolds and Quantization, European Mathematical Society (EMS), Zürich, 2007], we construct the remaining two Green operators G_F, G_aF locally in terms of Hadamard series. Afterward, we provide the global construction of (i/2)(G̃_aF − G̃_F), which relies on new techniques such as a well-posed Cauchy problem for bisolutions and a patching argument using Čech cohomology. This leads to global bisolutions of the Hadamard form, each of which can be chosen to be a Hadamard two-point function, i.e., the smooth part can be adapted such that, additionally, the symmetry and positivity conditions are exactly satisfied.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics to model the tail of the frequency-magnitude distribution of earthquakes in practice can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by the data. Using synthetic data allows one to quantify this bias in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM, with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment." Such an incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
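The bias discussed in this note can be reproduced with a short synthetic-catalog experiment: magnitudes are drawn from a doubly truncated Gutenberg-Richter law and the gap between the true M_max and the largest observed magnitude is recorded. The b-value, catalog size, and magnitude bounds below are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def sample_gr_magnitudes(n, b, m_min, m_max, rng):
    """Inverse-transform samples from the doubly truncated Gutenberg-Richter law."""
    u = rng.uniform(size=n)
    c = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return m_min - np.log10(1.0 - u * c) / b

rng = np.random.default_rng(0)
b, m_min, m_max_true = 1.0, 4.0, 9.0
# In every synthetic catalog the largest observed magnitude underestimates the
# true upper bound; the gap quantifies how poorly the tail is sampled.
gaps = [m_max_true - sample_gr_magnitudes(1000, b, m_min, m_max_true, rng).max()
        for _ in range(100)]
mean_gap = float(np.mean(gaps))
```

Because the tail is exponentially rare, even catalogs of a thousand events leave a substantial positive gap, which is why equating M_max with M_max,observed plus a small fixed increment excludes plausible models.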
In this paper, we examine the conditioning of discretizations of the Helmholtz problem. Although the discrete Helmholtz problem has been studied from different perspectives, to the best of our knowledge there is no conditioning analysis for it; we aim to fill this gap in the literature. We propose a novel method in 1D to observe the near-zero eigenvalues of a symmetric indefinite matrix. The standard classification of ill-conditioning based on the matrix condition number does not hold for the discrete Helmholtz problem. We relate the ill-conditioning of the discretization of the Helmholtz problem to the condition number of the matrix. We carry out an analytical conditioning analysis in 1D and extend our observations to 2D with numerical experiments. We examine several discretizations and find different regions in which the condition number of the problem shows different characteristics. We also explain the general behavior of the solutions in these regions.
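As a minimal illustration of the phenomenon studied here, the sketch below assembles the standard second-order finite-difference matrix for the 1D Helmholtz problem with Dirichlet boundary conditions and shows how its conditioning degrades when k^2 approaches an eigenvalue of the discrete Laplacian. The discretization and parameters are our own choices, not necessarily the paper's.

```python
import numpy as np

def helmholtz_matrix_1d(n, k):
    """Second-order finite differences for -u'' - k^2 u = f on (0, 1), u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A - k**2 * np.eye(n)

def cond2(A):
    """2-norm condition number via singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]
```

The eigenvalues of this matrix are (4/h^2) sin^2(j*pi*h/2) - k^2, so for k near pi (the smallest continuous eigenfrequency) one eigenvalue is near zero and the condition number blows up, while for k^2 between eigenvalues the matrix is symmetric indefinite.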
An explicit Dobrushin uniqueness region for Gibbs point processes with repulsive interactions
(2022)
We present a uniqueness result for Gibbs point processes with interactions that come from a non-negative pair potential; in particular, we provide an explicit uniqueness region in terms of the activity z and the inverse temperature beta. The technique relies on applying the classical Dobrushin criterion in the continuous setting. We also present a comparison with the two other uniqueness methods of cluster expansion and disagreement percolation, which can also be applied for this type of interaction.
Symmetric, elegantly entangled structures are curious mathematical constructions that have found their way into the heart of the chemistry lab and the toolbox of constructive geometry. Of particular interest are those structures—knots, links and weavings—which are composed locally of simple twisted strands and are globally symmetric. This paper considers the symmetric tangling of multiple 2-periodic honeycomb networks. We do this using a constructive methodology borrowing elements of graph theory, low-dimensional topology and geometry. The result is a wide-ranging enumeration of symmetric tangled honeycomb networks, providing a foundation for their exploration in both the chemistry lab and the geometer's toolbox.
Conventional embeddings of the edge-graphs of Platonic polyhedra, {f, z}, where f and z denote the number of edges in each face and the edge-valence at each vertex, respectively, are untangled in that they can be placed on a sphere (S^2) such that distinct edges do not intersect, analogous to unknotted loops, which allow crossing-free drawings of S^1 on the sphere. The most symmetric (flag-transitive) realizations of those polyhedral graphs are those of the classical Platonic polyhedra, whose symmetries are *2fz in Conway's two-dimensional (2D) orbifold notation (equivalent to the Schoenflies symbols I_h, O_h, and T_d). Tangled Platonic {f, z} polyhedra, which cannot lie on the sphere without edge-crossings, are constructed as windings of helices with three, five, seven, ... strands on multigenus surfaces formed by tubifying the edges of conventional Platonic polyhedra; they have (chiral) symmetries 2fz (I, O, and T), and their vertices, edges, and faces are symmetrically identical, realized with two flags. The analysis extends to the "theta_z" polyhedra, {2, z}. The vertices of these symmetric tangled polyhedra coincide with those of the Platonic polyhedra; however, their helicity requires curvilinear (or kinked) edges in all but one case. We show that these 2fz polyhedral tangles are maximally symmetric; more symmetric embeddings are necessarily untangled. On one hand, their topologies are very constrained: they are either self-entangled graphs (analogous to knots) or mutually catenated entangled compound polyhedra (analogous to links). On the other hand, an endless variety of entanglements can be realized for each topology. Simpler examples resemble patterns observed in synthetic organometallic materials and clathrin coats in vivo.
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces.
Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free-energy, is an interplay of surface area minimisation and stretching contributions, favouring cells of uniform thickness.
In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search for minimal-energy configurations of repelling charges on a sphere. We find three intertwined Thomson-problem solutions on a single sphere, occurring with a probability that depends on the radius of the substrate.
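The Thomson problem mentioned above is easy to explore numerically. The following is a minimal sketch using projected gradient descent on the Coulomb energy of n unit charges constrained to the unit sphere; the step size and iteration count are ad hoc choices, not the thesis' method.

```python
import numpy as np

def thomson_energy(x):
    """Coulomb energy sum of 1/|x_i - x_j| over all pairs."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    i, j = np.triu_indices(len(x), 1)
    return float((1.0 / d[i, j]).sum())

def minimize_thomson(n, steps=5000, lr=0.01, seed=0):
    """Projected gradient descent for n repelling unit charges on the sphere."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)               # ignore self-interaction
        force = (diff / d[:, :, None] ** 3).sum(axis=1)  # = -grad of the energy
        x = x + lr * force
        x /= np.linalg.norm(x, axis=1, keepdims=True)    # project back onto sphere
    return x, thomson_energy(x)
```

For n = 2 the charges settle at antipodes (energy 1/2), and for n = 4 the minimizer is the regular tetrahedron with energy about 3.674, matching the known small-n Thomson solutions.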
We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, leads to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. its independence of the energy minimisation method and the initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: an order-disorder phase transition is observed, with a stable Frank-Kasper phase.
The second part considers bicontinuous partitions of space into two network-like domains, and introduces an open-source tool for the identification of structures in electron microscopy images. We expand a method of matching experimentally accessible projections with computed projections of potential structures, introduced by Deng and Mieczkowski (1998). The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double Diamond surface structure to be dominant in these plant cells. We automate the matching process employing deep-learning methods, which manage to identify structures with excellent accuracy.
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD increasingly in everyday therapeutic use.
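The two-stage separation described above can be sketched with a toy conjugate model: stage one updates each patient's parameter from that patient's own TDM observations, and stage two updates the population-level parameter from the per-patient summaries alone. This normal-normal sketch is purely illustrative; the actual MIPD models (e.g., for neutrophil-guided paclitaxel dosing) are nonlinear mixed-effects models.

```python
import numpy as np

def update_individual(prior_mean, prior_var, obs, obs_var):
    """Stage 1: conjugate normal-normal update of one patient's parameter
    from that patient's TDM observations (toy stand-in for the real model)."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(obs) / obs_var)
    return post_mean, post_var

def update_population(pop_mean, pop_var, patient_means, between_var):
    """Stage 2: update the population mean from per-patient posterior means only;
    no raw TDM data has to leave the hospital, as in the sharing argument above."""
    n = len(patient_means)
    post_var = 1.0 / (1.0 / pop_var + n / between_var)
    post_mean = post_var * (pop_mean / pop_var + np.sum(patient_means) / between_var)
    return post_mean, post_var
```

Each new patient first gets an individual posterior; the stream of individual summaries then sequentially tightens the population prior used for the next patient.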
The motivation for this work was the question of reliability and robustness of seismic tomography. The problem is that many earth models exist which can describe the underlying ground motion records equally well. Most algorithms for reconstructing earth models provide a solution, but rarely quantify their variability. If there is no way to verify the imaged structures, an interpretation is hardly reliable. The initial idea was to explore the space of equivalent earth models using Bayesian inference. However, it quickly became apparent that the rigorous quantification of tomographic uncertainties could not be accomplished within the scope of a dissertation.
In order to maintain the fundamental concept of statistical inference, less complex problems from the geosciences are treated instead. This dissertation aims to anchor Bayesian inference more deeply in the geosciences and to transfer knowledge from applied mathematics. The underlying idea is to use well-known methods and techniques from statistics to quantify the uncertainties of inverse problems in the geosciences. This work is divided into three parts:
Part I introduces the necessary mathematics and should be understood as a kind of toolbox. With a physical application in mind, this part provides a compact summary of all methods and techniques used. It begins with an introduction to Bayesian inference. Then, as a special case, the focus is on regression with Gaussian processes under linear transformations. The derivation of covariance functions and the approximation of non-linearities are treated in more detail in separate chapters.
Part II presents two proof-of-concept studies in the field of seismology. The aim is to demonstrate the conceptual application of the introduced methods and techniques at moderate complexity. The example on traveltime tomography applies the approximation of non-linear relationships. The derivation of a covariance function from the wave equation is shown in the example of a damped vibrating string. With these two synthetic applications, a consistent concept for the quantification of modeling uncertainties is developed.
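The Gaussian process regression that underpins the toolbox of Part I can be summarized in a few lines; the squared-exponential covariance, noise level, and data below are our own illustrative choices, not those of the dissertation.

```python
import numpy as np

def rbf(x1, x2, ell=0.3, sigma=1.0):
    """Squared-exponential covariance function with length scale ell."""
    return sigma**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and covariance of GP regression with Gaussian noise."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf(x_train, x_test)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = rbf(x_test, x_test) - v.T @ v   # posterior uncertainty
    return mean, cov
```

The posterior covariance is exactly the kind of uncertainty quantification the dissertation carries through its applications: it collapses near the data and reverts to the prior far away from it.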
Part III presents the reconstruction of the Earth's archeomagnetic field. This application uses the whole toolbox presented in Part I and is correspondingly complex. The modeling of the past 1000 years is based on real data and reliably quantifies the spatial modeling uncertainties. The statistical model presented is widely used and is under active development.
The three applications mentioned are intentionally kept flexible to allow transferability to similar problems. The entire work focuses on the non-uniqueness of inverse problems in the geosciences. It is intended to be of relevance to those interested in the concepts of Bayesian inference.
The echo chamber model describes the development of groups in heterogeneous social networks. By heterogeneous social network we mean a set of individuals, each of whom represents exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is randomly and uniformly selected from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they continue to move closer together in their opinions, whereas in the case of opinions that are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this paper we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
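The round-based dynamics described above can be sketched in a few lines of code. The attraction step, the rewiring rule and all parameter names below are illustrative assumptions, not the thesis' exact specification:

```python
import random

def echo_chamber_round(opinions, edges, threshold, step):
    """One round of an echo-chamber-style dynamic (illustrative sketch):
    pick a uniformly random edge; if the endpoint opinions are close
    enough, move them towards each other; otherwise break the edge and
    let one endpoint seek a new, currently unconnected partner."""
    u, v = random.choice(sorted(edges))
    if abs(opinions[u] - opinions[v]) <= threshold:
        # opinions attract; the shift preserves the opinion sum
        shift = step * (opinions[v] - opinions[u])
        opinions[u] += shift
        opinions[v] -= shift
    else:
        edges.remove((u, v))
        # u seeks a new relationship among current non-neighbours
        candidates = [w for w in opinions
                      if w != u and (u, w) not in edges and (w, u) not in edges]
        if candidates:
            w = random.choice(candidates)
            edges.add((min(u, w), max(u, w)))
    return opinions, edges
```

Note that a round never changes the total opinion mass, and the edge count is preserved whenever a new partner is available; these are exactly the kinds of invariants the Markov chain analysis builds on.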
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Section 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are mutually identifiable networks that the analysed dynamics cannot distinguish, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time, using a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part rests entirely on an analytical treatment of second-degree recurrences with linear coefficients; the convergence to 0 of the resulting sequence as well as the speed of convergence are proved. The second part determines upper bounds on the expected value of the population size and on its variance, together with the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
The geomagnetic main field is vital for life on Earth, as it shields our habitat against the solar wind and cosmic rays. It is generated by the geodynamo in the Earth's outer core and has rich dynamics on various timescales. Global models of the field are used to study the interaction of the field and incoming charged particles, but also to infer core dynamics and to feed numerical simulations of the geodynamo. Modern satellite missions, such as the SWARM or the CHAMP mission, support high-resolution reconstructions of the global field. From the 19th century on, a global network of magnetic observatories has been established. It has been growing ever since, and global models can be constructed from the data it provides. Geomagnetic field models that extend further back in time rely on indirect observations of the field, i.e. thermoremanent records such as burnt clay or volcanic rocks, and sediment records from lakes and seas. These indirect records come with (partially very large) uncertainties, introduced by the complex measurement methods and the dating procedure.
Focusing on thermoremanent records only, the aim of this thesis is the development of a new modeling strategy for the global geomagnetic field during the Holocene, which takes the uncertainties into account and produces realistic estimates of the reliability of the model. This aim is approached by first considering snapshot models, in order to address the irregular spatial distribution of the records and the non-linear relation of the indirect observations to the field itself. In a Bayesian setting, a modeling algorithm based on Gaussian process regression is developed and applied to binned data. The modeling algorithm is then extended to the temporal domain and expanded to incorporate dating uncertainties. Finally, the algorithm is sequentialized to deal with numerical challenges arising from the size of the Holocene dataset.
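The correction step of such an algorithm rests on standard Gaussian process regression. The following toy sketch (squared-exponential kernel, exactly two observations, all names and values assumed purely for illustration) shows the closed-form posterior mean that such a scheme computes:

```python
import math

def sq_exp(x1, x2, ell=1.0):
    # squared-exponential covariance, a common (here assumed) choice
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def gp_posterior_mean(xs, ys, x_star, noise=1e-6):
    """GP regression posterior mean k(x*)^T K^{-1} y for exactly two
    observations, with the 2x2 kernel matrix inverted in closed form."""
    a = sq_exp(xs[0], xs[0]) + noise
    b = sq_exp(xs[0], xs[1])
    d = sq_exp(xs[1], xs[1]) + noise
    det = a * d - b * b
    # alpha = K^{-1} y, via the explicit 2x2 inverse
    alpha0 = (d * ys[0] - b * ys[1]) / det
    alpha1 = (-b * ys[0] + a * ys[1]) / det
    return sq_exp(x_star, xs[0]) * alpha0 + sq_exp(x_star, xs[1]) * alpha1
```

With negligible observation noise the posterior mean interpolates the data, which is the behaviour exploited when correcting a predicted field against binned records.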
The central result of this thesis, incorporating all of the aspects mentioned, is a new global geomagnetic field model. It covers the whole Holocene, back until 12000 BCE, and we call it ArchKalmag14k. When considering the uncertainties that are produced together with the model, it is evident that before 6000 BCE the thermoremanent database is not sufficient to support global models. For more recent times, ArchKalmag14k can be used to analyze features of the field under consideration of posterior uncertainties. The algorithm for generating ArchKalmag14k can be applied to different datasets and is provided to the community as an open-source Python package.
Symmetric, elegantly entangled structures are a curious mathematical construction that has found its way into the heart of the chemistry lab and the toolbox of constructive geometry. Of particular interest are those structures—knots, links and weavings—which are composed locally of simple twisted strands and are globally symmetric. This paper considers the symmetric tangling of multiple 2-periodic honeycomb networks. We do this using a constructive methodology borrowing elements of graph theory, low-dimensional topology and geometry. The result is a wide-ranging enumeration of symmetric tangled honeycomb networks, providing a foundation for their exploration in both the chemistry lab and the geometer's toolbox.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator will yield a Fredholm operator with a smooth kernel, whose index can be calculated by a formula similar to the Riemannian case.
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first order elliptic differential operator that will still yield regularity for solutions as well as Fredholm property for the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting to understand more general types of boundary conditions for the Dirac operator, conserving Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator when compared to the Riemannian setting. It will be shown that in contrast to the Riemannian case, going from a Fredholm boundary condition to its orthogonal complement works out fine in the Lorentzian setting. On the other hand, in order to deduce Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions for the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. Also, the role of the wave evolution operator in finding Fredholm boundary conditions is analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Also, sufficient conditions are derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the final Chapter 4, a few examples of boundary conditions are calculated by applying the results of the previous chapters. Restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than in the Riemannian setting.
We study boundary value problems for first-order elliptic differential operators on manifolds with compact boundary. The adapted boundary operator need not be selfadjoint and the boundary condition need not be pseudo-local. We show the equivalence of various characterisations of elliptic boundary conditions and demonstrate how the boundary conditions traditionally considered in the literature fit in our framework. The regularity of the solutions up to the boundary is proven. We show that imposing elliptic boundary conditions yields a Fredholm operator if the manifold is compact. We provide examples which are conveniently treated by our methods.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
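As a finite toy analogue of such boundary value problems on graphs, one can solve a discrete Dirichlet-type problem by iterating the mean-value property on interior vertices. This sketch is far simpler than the Neumann problems treated in the paper (which also concern infinite graphs and probabilistic representations) and is purely illustrative:

```python
def harmonic_extension(neighbors, boundary, iters=2000):
    """Harmonic extension on a finite graph: fix the values on a vertex
    boundary and iterate the mean-value property on interior vertices
    until (approximate) convergence. A toy finite analogue only."""
    values = {v: boundary.get(v, 0.0) for v in neighbors}
    for _ in range(iters):
        for v in neighbors:
            if v not in boundary:
                values[v] = sum(values[w] for w in neighbors[v]) / len(neighbors[v])
    return values
```

On a path graph the harmonic values interpolate the boundary data linearly, mirroring the analytic representation of solutions by their boundary behaviour.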
We show that local deformations, near closed subsets, of solutions to open partial differential relations can be extended to global deformations, provided all but the highest derivatives stay constant along the subset. The applicability of this general result is illustrated by a number of examples, dealing with convex embeddings of hypersurfaces, differential forms, and lapse functions in Lorentzian geometry.
The main application is a general approximation result by sections that have very restrictive local properties on open dense subsets. This shows, for instance, that given any real number K, every manifold of dimension at least 2 carries a complete C^{1,1}-metric which, on a dense open subset, is smooth with constant sectional curvature K. Of course, this is impossible for C^2-metrics in general.
We present a technique for the enumeration of all isotopically distinct ways of tiling a hyperbolic surface of finite genus, possibly nonorientable and with punctures and boundary. This generalizes the enumeration, via Delaney-Dress combinatorial tiling theory, of combinatorial classes of tilings to isotopy classes of tilings. To accomplish this, we derive an action of the mapping class group of the orbifold associated to the symmetry group of a tiling on the set of tilings. We explicitly give descriptions and presentations of semipure mapping class groups and of tilings as decorations on orbifolds. We apply this enumerative result to generate an array of isotopically distinct tilings of the hyperbolic plane with symmetries generated by rotations that are commensurate with the three-dimensional symmetries of the primitive, diamond, and gyroid triply periodic minimal surfaces, which have relevance to a variety of physical systems.
Randomised one-step time integration methods for deterministic operator differential equations
(2022)
Uncertainty quantification plays an important role in problems that involve inferring a parameter of an initial value problem from observations of the solution. Conrad et al. (Stat Comput 27(4):1065-1082, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to the unknown time discretisation error. We consider this strategy for systems that are described by deterministic, possibly time-dependent operator differential equations defined on a Banach space or a Gelfand triple. Our main results are strong error bounds on the random trajectories measured in Orlicz norms, proven under a weaker assumption on the local truncation error of the underlying deterministic time integration method. Our analysis establishes the theoretical validity of randomised time integration for differential equations in infinite-dimensional settings.
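The randomisation strategy of Conrad et al. can be illustrated for the explicit Euler method: each deterministic step is perturbed by a centred Gaussian whose standard deviation matches the order of the local truncation error (h^{3/2} for Euler). The scaling constant and the interface below are assumptions for illustration, not the paper's construction in full generality:

```python
import random

def randomised_euler(f, y0, t0, t1, n_steps, scale=1.0, rng=None):
    """Randomised explicit Euler: y_{n+1} = y_n + h f(t_n, y_n) + xi_n,
    with xi_n ~ N(0, (scale * h^{3/2})^2) matching the local truncation
    error order. `scale` is an assumed tuning constant."""
    rng = rng or random.Random()
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y) + rng.gauss(0.0, scale * h ** 1.5)
        t += h
    return y
```

Setting `scale=0` recovers the deterministic method; repeated randomised runs produce an ensemble of trajectories whose spread quantifies the discretisation uncertainty.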
Variational Bayesian inference for nonlinear Hawkes process with Gaussian process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian Process. Whereas previous work either relies on a less flexible parameterization of the model, or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work of Bayesian inference for Hawkes processes, and derive an inference algorithm by performing inference on an aggregated sum of Gaussian Processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model we apply our methodology on data from different domains and compare it to previously reported results.
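The conditional intensity underlying such a model can be sketched directly. Here the Gaussian-process self-effect is replaced by an arbitrary callable and the exponential link is an assumed stand-in that keeps the intensity nonnegative:

```python
import math

def hawkes_intensity(t, events, mu, g, link=math.exp):
    """Conditional intensity of a nonlinear Hawkes process:
    lambda(t) = link( mu + sum_{t_i < t} g(t - t_i) ).
    In the paper g is drawn from a Gaussian process; here g is any
    callable stand-in (an assumption for illustration)."""
    activation = mu + sum(g(t - ti) for ti in events if ti < t)
    return link(activation)

def g(dt):
    # toy self-effect mixing excitatory and inhibitory components
    return 0.8 * math.exp(-dt) - 0.3 * math.exp(-3.0 * dt)
```

Because g may take either sign, the same formulation covers both excitatory and inhibitory history dependence.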
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler-Jones-Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Guneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah-Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaume, Atiyah, Bismut and Witten.
Background
Cytochrome P450 (CYP) 3A contributes to the metabolism of many approved drugs. CYP3A perpetrator drugs can profoundly alter the exposure of CYP3A substrates. However, effects of such drug-drug interactions are usually reported as maximum effects rather than studied as time-dependent processes. Identification of the time course of CYP3A modulation can provide insight into when significant changes to CYP3A activity occurs, help better design drug-drug interaction studies, and manage drug-drug interactions in clinical practice.
Objective
We aimed to quantify the time course and extent of the in vivo modulation of different CYP3A perpetrator drugs on hepatic CYP3A activity and distinguish different modulatory mechanisms by their time of onset, using pharmacologically inactive intravenous microgram doses of the CYP3A-specific substrate midazolam, as a marker of CYP3A activity.
Methods
Twenty-four healthy individuals received an intravenous midazolam bolus followed by a continuous infusion for 10 or 36 h. Individuals were randomized into four arms: within each arm, two individuals served as a placebo control and, 2 h after start of the midazolam infusion, four individuals received the CYP3A perpetrator drug: voriconazole (inhibitor, orally or intravenously), rifampicin (inducer, orally), or efavirenz (activator, orally). After midazolam bolus administration, blood samples were taken every hour (rifampicin arm) or every 15 min (remaining study arms) until the end of midazolam infusion. A total of 1858 concentrations were equally divided between midazolam and its metabolite, 1'-hydroxymidazolam. A nonlinear mixed-effects population pharmacokinetic model of both compounds was developed using NONMEM. CYP3A activity modulation was quantified over time, as the relative change of midazolam clearance encountered by the perpetrator drug, compared to the corresponding clearance value in the placebo arm.
Results
Time course of CYP3A modulation and magnitude of maximum effect were identified for each perpetrator drug. While efavirenz CYP3A activation was relatively fast and short, reaching a maximum after approximately 2-3 h, the induction effect of rifampicin could only be observed after 22 h, with a maximum after approximately 28-30 h followed by a steep drop to almost baseline within 1-2 h. In contrast, the inhibitory impact of both oral and intravenous voriconazole was prolonged with a steady inhibition of CYP3A activity followed by a gradual increase in the inhibitory effect until the end of sampling at 8 h. Relative maximum clearance changes were +59.1%, +46.7%, -70.6%, and -61.1% for efavirenz, rifampicin, oral voriconazole, and intravenous voriconazole, respectively.
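For orientation, under a continuous infusion the steady-state concentration scales inversely with clearance (C_ss = R0/CL), so the reported relative clearance changes translate directly into exposure ratios. This back-of-the-envelope helper is not part of the cited model:

```python
def css_ratio(clearance_change_pct):
    """Steady-state concentration ratio under a continuous infusion
    (C_ss = R0/CL), given a relative clearance change in percent.
    Illustrative only; assumes steady state and constant infusion rate."""
    return 1.0 / (1.0 + clearance_change_pct / 100.0)
```

For example, the +59.1% clearance increase under efavirenz would lower midazolam steady-state concentration to roughly 63% of baseline, while the -70.6% decrease under oral voriconazole would raise it more than threefold.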
Conclusions
We could distinguish between different mechanisms of CYP3A modulation by the time of onset. Identification of the time at which clearance significantly changes, per perpetrator drug, can guide the design of an optimal sampling schedule for future drug-drug interaction studies. The impact of a short-term combination of different perpetrator drugs on the paradigm CYP3A substrate midazolam was characterized and can define combination intervals in which no relevant interaction is to be expected.
Ulcerative colitis (UC) is one of the inflammatory bowel diseases, and moderate-to-severe UC patients can be treated with anti-tumour necrosis factor alpha monoclonal antibodies, including infliximab (IFX). Even though treatment of UC patients with IFX has been in place for over a decade, many gaps in the modelling of IFX pharmacokinetics (PK) in this population remain. This is even more true for acute severe UC (ASUC) patients, for whom early prediction of IFX PK could highly improve treatment outcome. Thus, this review aims to compile and analyse published population PK models of IFX in UC and ASUC patients, and to assess the current knowledge on the impact of disease activity on IFX PK. For this, a semi-systematic literature search was conducted, from which 26 publications including a population PK model analysis of UC patients receiving IFX therapy were selected. Amongst those, only four developed a model specifically for UC patients, and only three populations included severe UC patients. Investigations of the impact of disease activity on PK were reported in only 4 of the 14 models selected. In addition, the lack of reported model code and assessment of predictive performance makes the use of published models in a clinical setting challenging. Thus, more comprehensive investigation of PK in UC and ASUC is needed, as well as more adequate reporting of developed models and their evaluation, in order to apply them in a clinical setting.
We propose a global geomagnetic field model for the last 14 thousand years, based on thermoremanent records. We call the model ArchKalmag14k. ArchKalmag14k is constructed by modifying recently proposed algorithms, based on space-time correlations. Due to the amount of data and complexity of the model, the full Bayesian posterior is numerically intractable. To tackle this, we sequentialize the inversion by implementing a Kalman-filter with a fixed time step. Every step consists of a prediction, based on a degree dependent temporal covariance, and a correction via Gaussian process regression. Dating errors are treated via a noisy input formulation. Cross correlations are reintroduced by a smoothing algorithm and model parameters are inferred from the data. Due to the specific statistical nature of the proposed algorithms, the model comes with space and time-dependent uncertainty estimates. The new model ArchKalmag14k shows less variation in the large-scale degrees than comparable models. Local predictions represent the underlying data and agree with comparable models, if the location is sampled well. Uncertainties are bigger for earlier times and in regions of sparse data coverage. We also use ArchKalmag14k to analyze the appearance and evolution of the South Atlantic anomaly together with reverse flux patches at the core-mantle boundary, considering the model uncertainties. While we find good agreement with earlier models for recent times, our model suggests a different evolution of intensity minima prior to 1650 CE. In general, our results suggest that prior to 6000 BCE the data is not sufficient to support global models.
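The predict-correct cycle at the heart of such a sequentialized inversion can be illustrated with a scalar Kalman filter step. All parameters below are generic placeholders, not ArchKalmag14k's actual state, transition or covariances:

```python
def kalman_step(m, P, y, a, q, h, r):
    """One predict-correct cycle of a scalar Kalman filter: a prediction
    under transition a with process noise q, followed by a regression
    update against observation y with design h and observation noise r."""
    # prediction based on the temporal model
    m_pred = a * m
    P_pred = a * P * a + q
    # correction via the (here one-dimensional) Gaussian regression update
    S = h * P_pred * h + r
    K = P_pred * h / S
    m_new = m_pred + K * (y - h * m_pred)
    P_new = (1.0 - K * h) * P_pred
    return m_new, P_new
```

The posterior variance P_new is what, in the full model, becomes the space- and time-dependent uncertainty estimate: it shrinks where data are informative and stays large where coverage is sparse.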
Let X be an infinite linearly ordered set and let Y be a nonempty subset of X. We calculate the relative rank of the semigroup OP(X,Y) of all orientation-preserving transformations on X with restricted range Y modulo the semigroup O(X,Y) of all order-preserving transformations on X with restricted range Y. For Y = X, we characterize the relative generating sets of minimal size.
Alpine ecosystems on the Tibetan Plateau are being threatened by ongoing climate warming and intensified human activities. Ecological time-series obtained from sedimentary ancient DNA (sedaDNA) are essential for understanding past ecosystem and biodiversity dynamics on the Tibetan Plateau and their responses to climate change at a high taxonomic resolution. Hitherto only few but promising studies have been published on this topic. The potential and limitations of using sedaDNA on the Tibetan Plateau are not fully understood. Here, we (i) provide updated knowledge of and a brief introduction to the suitable archives, region-specific taphonomy, state-of-the-art methodologies, and research questions of sedaDNA on the Tibetan Plateau; (ii) review published and ongoing sedaDNA studies from the Tibetan Plateau; and (iii) give some recommendations for future sedaDNA study designs. Based on the current knowledge of taphonomy, we infer that deep glacial lakes with freshwater and high clay sediment input, such as those from the southern and southeastern Tibetan Plateau, may have a high potential for sedaDNA studies. Metabarcoding (for microorganisms and plants), metagenomics (for ecosystems), and hybridization capture (for prehistoric humans) are three primary sedaDNA approaches which have been successfully applied on the Tibetan Plateau, but their power is still limited by several technical issues, such as PCR bias and incompleteness of taxonomic reference databases. Setting up high-quality and open-access regional taxonomic reference databases for the Tibetan Plateau should be given priority in the future. To conclude, the archival, taphonomic, and methodological conditions of the Tibetan Plateau are favorable for performing sedaDNA studies. More research should be encouraged to address questions about long-term ecological dynamics at ecosystem scale and to bring the paleoecology of the Tibetan Plateau into a new era.
In this work, we present Raman lidar data (from a Nd:YAG operating at 355 nm, 532 nm and 1064 nm) from the international research village Ny-Alesund for the time period of January to April 2020 during the Arctic haze season of the MOSAiC winter. We present values of the aerosol backscatter, the lidar ratio and the backscatter Angstrom exponent, though the latter depends on wavelength. The aerosol polarization was generally below 2%, indicating mostly spherical particles. We observed that events with high backscatter and high lidar ratio did not coincide. In fact, the highest lidar ratios (LR > 75 sr at 532 nm) were already found by January and may have been caused by hygroscopic growth, rather than by advection of more continental aerosol. Further, we performed an inversion of the lidar data to retrieve a refractive index and a size distribution of the aerosol. Our results suggest that in the free troposphere (above approximately 2500 m) the aerosol size distribution is quite constant in time, with dominance of small particles with a modal radius well below 100 nm. On the contrary, below approximately 2000 m in altitude, we frequently found gradients in aerosol backscatter and even size distribution, sometimes in accordance with gradients of wind speed, humidity or elevated temperature inversions, as if the aerosol was strongly modified by vertical displacement in what we call the "mechanical boundary layer". Finally, we present an indication that additional meteorological soundings during the MOSAiC campaign did not necessarily improve the fidelity of air backtrajectories.
We introduce the class of "smooth rough paths" and study their main properties. Working in a smooth setting allows us to discard sewing arguments and focus on algebraic and geometric aspects. Specifically, a Maurer-Cartan perspective is the key to a purely algebraic form of Lyons' extension theorem, the renormalization of rough paths following up on [Bruned et al.: A rough path perspective on renormalization, J. Funct. Anal. 277(11), 2019], as well as a related notion of "sum of rough paths". We first develop our ideas in a geometric rough path setting, as this best resonates with recent works on signature varieties, as well as with the renormalization of geometric rough paths. We then explore extensions to the quasi-geometric and the more general Hopf algebraic setting.
We construct and examine the prototype of a deep learning-based ground-motion model (GMM) that is both fully data driven and nonergodic. We formulate ground-motion modeling as an image processing task, in which a specific type of neural network, the U-Net, relates continuous, horizontal maps of earthquake predictive parameters to sparse observations of a ground-motion intensity measure (IM). The processing of map-shaped data allows the natural incorporation of absolute earthquake source and observation site coordinates, and is, therefore, well suited to include site-, source-, and path-specific amplification effects in a nonergodic GMM. Data-driven interpolation of the IM between observation points is an inherent feature of the U-Net and requires no a priori assumptions. We evaluate our model using both a synthetic dataset and a subset of observations from the KiK-net strong motion network in the Kanto basin in Japan. We find that the U-Net model is capable of learning the magnitude-distance scaling, as well as site-, source-, and path-specific amplification effects from a strong motion dataset. The interpolation scheme is evaluated using a fivefold cross validation and is found to provide on average unbiased predictions. The magnitude-distance scaling as well as the site amplification of response spectral acceleration at a period of 1 s obtained for the Kanto basin are comparable to previous regional studies.
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing stochastic parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
We study superharmonic functions for Schrödinger operators on general weighted graphs. Specifically, we prove two decompositions which both go under the name Riesz decomposition in the literature. The first one decomposes a superharmonic function into a harmonic and a potential part. The second one decomposes a superharmonic function into a sum of superharmonic functions with certain upper bounds given by prescribed superharmonic functions. As an application we show a Brelot type theorem.
We adapt the Faddeev-LeVerrier algorithm for the computation of characteristic polynomials to the computation of the Pfaffian of a skew-symmetric matrix. This yields a very simple, easy to implement and parallelize algorithm of computational cost O(n^(β+1)), where n is the size of the matrix and O(n^β) is the cost of multiplying n x n matrices, with β ∈ [2, 2.37286). We compare its performance to that of other algorithms and show how it can be used to compute the Euler form of a Riemannian manifold using computer algebra.
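For reference, the classical Faddeev-LeVerrier recursion for characteristic polynomial coefficients, which the paper adapts to the Pfaffian, reads M_k = A(M_{k-1} + c_{k-1} I), c_k = -tr(M_k)/k. A plain-Python sketch of the textbook matrix version (not the Pfaffian adaptation itself):

```python
def faddeev_leverrier(A):
    """Characteristic polynomial coefficients of a square matrix A via
    the classical Faddeev-LeVerrier recursion. Returns [1, c1, ..., cn]
    such that p(x) = x^n + c1 x^{n-1} + ... + cn."""
    n = len(A)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    M = [[0.0] * n for _ in range(n)]   # M_0 = 0, c_0 = 1
    coeffs = [1.0]
    for k in range(1, n + 1):
        # M_k = A (M_{k-1} + c_{k-1} I)
        B = [row[:] for row in M]
        for i in range(n):
            B[i][i] += coeffs[-1]
        M = matmul(A, B)
        # c_k = -tr(M_k) / k
        coeffs.append(-sum(M[i][i] for i in range(n)) / k)
    return coeffs
```

The recursion uses only matrix products and traces, which is what makes it easy to parallelize and well suited to computer algebra systems.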
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
In the semiclassical limit ℏ → 0, we analyze a class of self-adjoint Schrödinger operators H_ℏ = ℏ²L + ℏW + V·id_E acting on sections of a vector bundle E over an oriented Riemannian manifold M, where L is a Laplace type operator, W is an endomorphism field, and the potential energy V has non-degenerate minima at a finite number of points m_1, ..., m_r ∈ M, called potential wells. Using quasimodes of WKB type near m_j for eigenfunctions associated with the low-lying eigenvalues of H_ℏ, we analyze the tunneling effect, i.e. the splitting between low-lying eigenvalues, which arises e.g. in certain symmetric configurations. Technically, we treat the coupling between different potential wells by an interaction matrix, and we consider the case of a single minimal geodesic (with respect to the associated Agmon metric) connecting two potential wells as well as the case of a submanifold of minimal geodesics of dimension ℓ + 1. This dimension ℓ determines the polynomial prefactor for the exponentially small eigenvalue splitting.
The multifaceted nature of the angle concept is as fascinating as it is challenging with regard to how the concept can be approached in school mathematics. Starting from various conceptions of angle, this thesis develops a teaching course for conveying the angle concept and ultimately translates it into concrete implementations for classroom instruction.
The thesis begins with a subject-matter didactical analysis of the angle concept, accompanied by an information-theoretic definition of angle. In this definition, the angle concept is developed around the question of what information about an angle is needed in order to describe it. In this way, the conceptions of angle that appear in the mathematics education literature can be re-derived and validated from the perspective of academic mathematics. In parallel, a procedure is described for processing angles computationally, also under dynamic aspects, so that consequences of the information-theoretic angle definition become available, for example, in dynamic geometry systems.
Starting from the question of how an abstraction of the angle concept can take place in mathematics lessons, the idea of Grundvorstellungen (basic mental models) and the teaching strategy of ascending from the abstract to the concrete are related to one another. From the combination of the two theories, a general route is derived for building, within this teaching strategy, an initial abstraction of individual angle aspects; this is intended to enable the generation of basic mental models of the components of the respective angle aspect and of operating with these components. To this end, the teaching strategy is adapted, in particular to realize the transition from angle situations to angle contexts. Explicitly for the aspect of the angular field, learning actions and requirements for a learning model that support students in acquiring the concept are described, based on an investigation of the visual fields of animals.
Activity theory, to which the above teaching strategy belongs, runs as a common thread through the rest of the thesis, as design principles are then generated on a theoretical basis and feed into the development of an interactive learning environment. Among other things, the model of Artifact-Centric Activity Theory is used for this purpose; it describes the web of relations between students, the mathematical object, and an app to be developed as a mediating medium, with both the use of the app in the classroom and its rule-guided development being part of the model. Following the approach of didactical design research (Fachdidaktische Entwicklungsforschung), the learning environment is then tried out, evaluated, and revised in several cycles. A qualitative setting is used that draws on semiotic mediation and investigates to what extent the quality of the learning actions shown by the students can be explained by the design principles and their implementation. The thesis concludes with a final version of the design principles and a learning environment, derived from them, for introducing the concept of the angular field in the fourth grade.
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood-Salsburg equations, the Dobrushin contraction principle, and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we couple through a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can always be formulated as an optimization problem in which a global minimum of a certain cost function has to be found. A variety of approaches is known for solving this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, affecting, for example, its applicability to non-differentiable functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm that is faster than the established methods and applicable to non-linear problems. Such non-linear problems occur, for example, when estimating Euler angles or when the more robust L_1 norm is applied. We therefore investigate the usability of stochastic optimization methods from the CMA-ES family for modeling the geomagnetic field of Earth's core. We first discuss the basics of core field modeling and its parameterization using examples from the literature, and then provide the theoretical background of the stochastic methods. A specific CMA-ES algorithm was successfully applied to invert data from the Swarm satellite mission and to derive the core field model EvoMag. The EvoMag model agrees well with established models and with observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
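To make the derivative-free, stochastic search loop concrete, here is a deliberately stripped-down relative of the CMA-ES family: a (1+1) evolution strategy with the 1/5th success rule, applied to a toy inverse problem with the robust L_1 misfit mentioned in the abstract. This is an illustrative sketch, not the EvoMag inversion; a full CMA-ES additionally adapts a covariance matrix and evaluates a population per generation.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=4000, seed=0):
    """(1+1) evolution strategy with the 1/5th success rule: mutate the
    current point with an isotropic Gaussian, keep improvements, and
    adapt the step size from the observed success rate."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.22   # grow the step on success ...
        else:
            sigma *= 0.95   # ... shrink it on failure
    return x, fx

# toy inverse problem: recover slope and intercept of a linear model
# from samples, using the non-differentiable L1 misfit
true = [1.5, -0.7]
data = [(t, true[0] * t + true[1]) for t in range(-5, 6)]
def misfit(p):
    return sum(abs(p[0] * t + p[1] - d) for t, d in data)

est, err = one_plus_one_es(misfit, [0.0, 0.0])
```

The L_1 cost is not differentiable at the optimum, so gradient-based solvers need smoothing here, while the evolution strategy only ever evaluates the misfit; this is the property that makes the CMA-ES family attractive for robust-norm field modeling.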
The Rarita-Schwinger operator is the twisted Dirac operator restricted to 3/2-spinors. Rarita-Schwinger fields are solutions of this operator that are in addition divergence-free. This is an overdetermined problem and solutions are rare; it is even more unexpected for there to be large-dimensional spaces of solutions. In this paper we prove the existence of a sequence of compact manifolds in any given dimension greater than or equal to 4 for which the dimension of the space of Rarita-Schwinger fields tends to infinity. These manifolds are either simply connected Kähler-Einstein spin manifolds with negative Einstein constant, or products of such spaces with flat tori. Moreover, we construct Calabi-Yau manifolds of even complex dimension with more linearly independent Rarita-Schwinger fields than flat tori of the same dimension.
Contributions to the theoretical analysis of the algorithms with adversarial and dependent data
(2021)
In this work I present concentration inequalities of Bernstein type for norms of Banach-space-valued random sums under a general functional weak-dependence assumption (so-called C-mixing). The latter is then used to prove, in the asymptotic framework, excess-risk upper bounds for regularised Hilbert-space-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results from the batch statistical setting are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression-type algorithm in the setting of online nonparametric regression with arbitrary data sequences; here, in particular, the robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a C-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviation inequalities (of both Azuma-Hoeffding and Burkholder type) for partial sums of real-valued weakly dependent random fields, under a projective-type dependence condition.
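For orientation, the classical Bernstein inequality for independent, centered, bounded real random variables — the scalar baseline that the thesis extends to norms of Banach-valued sums under weak dependence — reads:

```latex
% X_1, ..., X_n independent, E[X_i] = 0, |X_i| <= M almost surely:
\[
  \mathbb{P}\Bigl(\Bigl|\sum_{i=1}^{n} X_i\Bigr| \ge t\Bigr)
  \;\le\; 2 \exp\!\left( - \frac{t^2/2}{\sum_{i=1}^{n} \mathbb{E}[X_i^2] + M t / 3} \right).
\]
```

The denominator interpolates between a Gaussian (variance-dominated) regime for small t and an exponential (boundedness-dominated) regime for large t; the dependent, vector-valued versions in the thesis keep this two-regime shape.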
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework, a neural network consisting of random feature maps is trained sequentially by incoming observations within a data assimilation procedure. By employing Takens's embedding theorem, the network is trained on delay coordinates. We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
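A minimal sketch of the random-feature half of this idea, under stated simplifications: internal weights are drawn once and frozen, and only the linear readout is trained — here by batch ridge regression on a scalar toy map, whereas RAFDA trains the readout sequentially within an ensemble data assimilation procedure on delay coordinates. All names and the toy system are illustrative.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def random_feature_regression(xs, ys, n_feat=80, reg=1e-4, seed=0):
    """Random feature map: random (w, b) are fixed, only the linear
    readout c is fitted, via ridge-regularized normal equations."""
    rng = random.Random(seed)
    w = [rng.uniform(-4.0, 4.0) for _ in range(n_feat)]
    b = [rng.uniform(-4.0, 4.0) for _ in range(n_feat)]
    feats = [[math.tanh(w[j] * x + b[j]) for j in range(n_feat)] for x in xs]
    A = [[sum(f[i] * f[j] for f in feats) + (reg if i == j else 0.0)
          for j in range(n_feat)] for i in range(n_feat)]
    rhs = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(n_feat)]
    c = solve(A, rhs)
    return lambda x: sum(c[j] * math.tanh(w[j] * x + b[j]) for j in range(n_feat))

# toy surrogate: learn the one-step propagator of the chaotic logistic map
def step(x):
    return 3.7 * x * (1.0 - x)

traj = [0.4]
for _ in range(300):
    traj.append(step(traj[-1]))
model = random_feature_regression(traj[:-1], traj[1:])
# accuracy on states inside the training range of the attractor
err = max(abs(model(0.1 * k) - step(0.1 * k)) for k in range(3, 10))
```

Because only the readout is trained, the fit reduces to a linear problem; this linearity is what makes a sequential, Kalman-style update of the weights possible in the RAFDA setting.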