Refine
Year of publication
- 2021 (48)
Document Type
- Article (38)
- Doctoral Thesis (6)
- Bachelor Thesis (2)
- Master's Thesis (2)
Is part of the Bibliography
- yes (48)
Keywords
- Bayesian inverse problems (3)
- data assimilation (3)
- Corona (2)
- Gamma-convergence (2)
- Onsager-Machlup functional (2)
- Verzweigungsprozess (2)
- branching process (2)
- estimation (2)
- kernel methods (2)
- maximum a posteriori (2)
Institute
- Institut für Mathematik (48)
Nonparametric goodness-of-fit testing for parametric covariate models in pharmacometric analyses
(2021)
The characterization of covariate effects on model parameters is a crucial step during pharmacokinetic/pharmacodynamic analyses. Although covariate selection criteria have been studied extensively, the choice of the functional relationship between covariates and parameters has received much less attention. Often, a simple particular class of covariate-to-parameter relationships (linear, exponential, etc.) is chosen ad hoc or based on domain knowledge, and a statistical evaluation is limited to the comparison of a small number of such classes. Goodness-of-fit testing against a nonparametric alternative provides a more rigorous approach to covariate model evaluation, but no such test has been proposed so far. In this manuscript, we derive and evaluate nonparametric goodness-of-fit tests for parametric covariate models (the null hypothesis) against a kernelized, Tikhonov-regularized alternative, transferring concepts from statistical learning to the pharmacological setting. The approach is evaluated in a simulation study on the estimation of the age-dependent maturation effect on the clearance of a monoclonal antibody. Scenarios of varying data sparsity and residual error are considered. The goodness-of-fit test correctly identified misspecified parametric models with high power for relevant scenarios. The case study provides proof-of-concept of the feasibility of the proposed approach, which is envisioned to be beneficial for applications that lack well-founded covariate models.
The Arnoldi process can be applied to inexpensively approximate matrix functions of the form f(A)v and matrix functionals of the form v*(f(A))*g(A)v, where A is a large square non-Hermitian matrix, v is a vector, and the superscript * denotes transposition and complex conjugation. Here f and g are analytic functions that are defined in suitable regions in the complex plane. This paper reviews available approximation methods and describes new ones that provide higher accuracy for essentially the same computational effort by exploiting available, but generally unused, moment information. Numerical experiments show that in some cases the proposed modifications of the Arnoldi decomposition can improve the accuracy of v*(f(A))*g(A)v about as much as performing an additional step of the Arnoldi process.
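As a sketch of the basic mechanism the paper builds on (not the authors' refined methods): an m-step Arnoldi decomposition A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T yields the standard approximation f(A)v ≈ ||v|| V_m f(H_m) e_1. A minimal NumPy/SciPy illustration with f = exp; the matrix, sizes, and tolerances are arbitrary choices for demonstration:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fun(A, v, m, f=expm):
    """Approximate f(A) v via an m-step Arnoldi decomposition:
    f(A) v ~ ||v|| * V_m f(H_m) e_1."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: Krylov space is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (f(H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 60)) / 8     # small non-Hermitian test matrix
v = rng.standard_normal(60)
approx = arnoldi_fun(A, v, m=25)
exact = expm(A) @ v
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # small relative error
```

Only the small m-by-m matrix H_m is passed to the (expensive) matrix function, which is what makes the approach inexpensive for large A.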
The multifaceted nature of the concept of angle is as fascinating as it is challenging with regard to its treatment in school mathematics. Starting from various conceptions of the angle concept, this thesis develops a course for teaching the angle concept and ultimately translates it into concrete implementations for classroom use.
First, a subject-didactic analysis of the angle concept is carried out, accompanied by an information-theoretic definition of angle. In the latter, a definition of the angle concept is developed around the question of which information about an angle is needed in order to describe it. In this way, the conceptions of angle found in the mathematics education literature can be re-derived and validated from a mathematical perspective. In parallel, a procedure is described for processing angles computationally, including their dynamic aspects, so that consequences of the information-theoretic angle definition become available, for example, in dynamic geometry systems.
With regard to how an abstraction of the angle concept can take place in mathematics lessons, the notion of Grundvorstellungen (basic mental models) and the teaching strategy of ascending from the abstract to the concrete are related to each other. From the combination of the two theories, a general path is derived for how, within this teaching strategy, an initial abstraction of individual angle aspects can be built up, which is intended to enable the generation of basic mental models of the components of the respective angle aspect and of operating with these components. To this end, the teaching strategy is adapted, in particular to realise the transition from angle situations to angle contexts. Specifically for the aspect of the angular field, learning actions and requirements for a learning model that support pupils in acquiring the concept are described, based on an investigation of the fields of vision of animals.
Activity theory, to which the above teaching strategy belongs, runs as a common thread through the remainder of the thesis, as design principles are then generated on a theoretical basis and lead to the development of an interactive learning environment. Among other things, the model of Artifact-Centric Activity Theory is used, which describes the web of relationships between pupils, the mathematical object, and an app to be developed as a mediating medium; the use of the app in the classroom context and its rule-guided development are part of the model. Following the approach of didactical design research, the learning environment is then tested, evaluated, and revised in several cycles. A qualitative setting is applied that draws on semiotic mediation and investigates to what extent the quality of the learning actions shown by the pupils can be explained by the design principles and their implementation. The thesis concludes with a final version of the design principles and a resulting learning environment for introducing the concept of the angular field in the fourth grade.
Various particle filters have been proposed over the last couple of decades with the common feature that the update step is governed by a type of control law. This feature makes them an attractive alternative to traditional sequential Monte Carlo, which scales poorly with the state dimension due to weight degeneracy. This article proposes a unifying framework that allows us to systematically derive the McKean-Vlasov representations of these filters for the discrete-time and continuous-time observation cases, taking inspiration from the smooth approximation of the data considered in [D. Crisan and J. Xiong, Stochastics, 82 (2010), pp. 53-68; J. M. Clark and D. Crisan, Probab. Theory Related Fields, 133 (2005), pp. 43-56]. We consider three filters that have been proposed in the literature and use this framework to derive Itô representations of their limiting forms as the approximation parameter δ → 0. All filters require the solution of a Poisson equation defined on R^d, for which existence and uniqueness of solutions can be a nontrivial issue. We additionally establish conditions on the signal-observation system that ensure well-posedness of the weighted Poisson equation arising in one of the filters.
The Rarita-Schwinger operator is the twisted Dirac operator restricted to 3/2-spinors. Rarita-Schwinger fields are solutions of this operator which are in addition divergence-free. This is an overdetermined problem and solutions are rare; it is even more unexpected for there to be large dimensional spaces of solutions. In this paper we prove the existence of a sequence of compact manifolds in any given dimension greater than or equal to 4 for which the dimension of the space of Rarita-Schwinger fields tends to infinity. These manifolds are either simply connected Kähler-Einstein spin with negative Einstein constant, or products of such spaces with flat tori. Moreover, we construct Calabi-Yau manifolds of even complex dimension with more linearly independent Rarita-Schwinger fields than flat tori of the same dimension.
Games and game-typical elements such as collecting loyalty points are hard to imagine everyday life without. They are also increasingly used in companies and in learning environments. However, gamification as a method has so far hardly been classified for the educational context or made accessible to teachers.
This bachelor's thesis therefore aims to present a systematic structuring and treatment of gamification as well as innovative approaches for using game-typical elements in teaching, specifically in mathematics lessons. This can provide a basis for other subject areas as well as for other forms of teaching, and thus demonstrate the feasibility of gamification in one's own courses.
The thesis sets out why, and by means of which elements, gamification can increase learners' motivation and willingness to perform in the long term, foster social and personal competencies, and encourage learners to be more active. In addition, gamification is explicitly related to fundamental principles of mathematics education, thereby highlighting its relevance for mathematics teaching.
Subsequently, the individual elements of gamification, such as points, levels, badges, characters, and narrative framing, are described schematically along a classification developed specifically for the educational context, „FUN" (Feedback – User specific elements – Neutral elements); their functions and effects are presented, and possible uses in the classroom are shown. This includes ideas on feedback conducive to learning, options for differentiation, and the design of the lesson framework, all of which can be implemented in courses of any kind. The thesis also includes a specific example, a lesson plan for a gamified mathematics lesson together with the associated working materials, which illustrates the use of gamification.
Gamification often offers advantages over traditional teaching, but like any method it must be adapted to the content and the target group. Further research could address specific motivational structures, person-specific differences, and mathematical content such as problem solving or switching between different representations with respect to gamified forms of teaching.
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Gamma-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
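In the Gaussian-prior case the connection between the two approaches can be made explicit. If the posterior has density proportional to exp(−Φ(u)) with respect to a Gaussian prior whose Cameron-Martin space is (E, ‖·‖_E), the OM functional takes the familiar Tikhonov form (a standard formula, stated here in generic notation rather than the paper's):

```latex
I(u) \;=\; \Phi(u) \;+\; \tfrac{1}{2}\,\|u\|_{E}^{2},
\qquad
u_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{u \in E} I(u).
```

Γ-convergence of a sequence of such functionals, combined with equicoercivity, then yields convergence of their minimisers, i.e. of the corresponding MAP estimators.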
We derive Onsager-Machlup functionals for countable product measures on weighted ℓ^p subspaces of the sequence space R^N. Each measure in the product is a shifted and scaled copy of a reference probability measure on R that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
Forecast verification
(2021)
The philosophy of forecast verification is rather different between deterministic and probabilistic verification metrics: generally speaking, deterministic metrics measure differences, whereas probabilistic metrics assess reliability and sharpness of predictive distributions. This article considers the root-mean-square error (RMSE), which can be seen as a deterministic metric, and the probabilistic metric Continuous Ranked Probability Score (CRPS), and demonstrates that under certain conditions, the CRPS can be mathematically expressed in terms of the RMSE when these metrics are aggregated. One of the required conditions is the normality of distributions. The other condition is that, while the forecast ensemble need not be calibrated, any bias or over/underdispersion cannot depend on the forecast distribution itself. Under these conditions, the CRPS is a fraction of the RMSE, and this fraction depends only on the heteroscedasticity of the ensemble spread and the measures of calibration. The derived CRPS-RMSE relationship for the case of perfect ensemble reliability is tested on simulations of idealised two-dimensional barotropic turbulence. Results suggest that the relationship holds approximately despite the normality condition not being met.
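The special case of perfect ensemble reliability can be checked numerically. For a calibrated, homoscedastic Gaussian forecast the aggregated CRPS/RMSE ratio tends to 1/√π ≈ 0.564; the simulation below is a hypothetical minimal check using the closed-form Gaussian CRPS (Gneiting-Raftery formula), not the paper's turbulence experiment:

```python
import numpy as np
from scipy.special import erf

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / np.sqrt(2.0)))        # standard normal CDF at z
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)  # standard normal PDF at z
    return sigma * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / np.sqrt(np.pi))

rng = np.random.default_rng(1)
n = 200_000
mu = rng.standard_normal(n)                 # forecast means
sigma = 1.3                                 # homoscedastic, perfectly reliable spread
y = mu + sigma * rng.standard_normal(n)     # observations drawn from the forecast

crps = crps_gaussian(mu, sigma, y).mean()
rmse = np.sqrt(np.mean((y - mu) ** 2))
print(crps / rmse)                          # close to 1/sqrt(pi) ≈ 0.5642
```

Bias or over/underdispersion of the ensemble would change this fraction through the calibration measures mentioned in the abstract.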
Bayesian inference can be embedded into an appropriately defined dynamics in the space of probability measures. In this paper, we take Brownian motion and its associated Fokker-Planck equation as a starting point for such embeddings and explore several interacting particle approximations. More specifically, we consider both deterministic and stochastic interacting particle systems and combine them with the idea of preconditioning by the empirical covariance matrix. In addition to leading to affine invariant formulations which asymptotically speed up convergence, preconditioning allows for gradient-free implementations in the spirit of the ensemble Kalman filter. While such gradient-free implementations have been demonstrated to work well for posterior measures that are nearly Gaussian, we extend their scope of applicability to multimodal measures by introducing localized gradient-free approximations. Numerical results demonstrate the effectiveness of the considered methodologies.
Identification of unknown parameters on the basis of partial and noisy data is a challenging task, in particular in high-dimensional and non-linear settings. Gaussian approximations to the problem, such as ensemble Kalman inversion, tend to be robust and computationally cheap and often produce astonishingly accurate estimates despite their simplifying underlying assumptions. Yet there is a lot of room for improvement, specifically regarding a correct approximation of a non-Gaussian posterior distribution. The tempered ensemble transform particle filter is an adaptive sequential Monte Carlo (SMC) method in which resampling is based on an optimal transport map. Unlike ensemble Kalman inversion, it does not require any assumptions regarding the posterior distribution and has hence been shown to provide promising results for non-linear, non-Gaussian inverse problems. However, the improved accuracy comes at the price of much higher computational complexity, and the method is not as robust as ensemble Kalman inversion in high-dimensional problems. In this work, we add an entropy-inspired regularisation factor to the underlying optimal transport problem that allows the high computational cost to be reduced considerably via Sinkhorn iterations. Further, the robustness of the method is increased via an ensemble Kalman inversion proposal step before each update of the samples, which is also referred to as a hybrid approach. The promising performance of the introduced method is numerically verified by testing it on a steady-state single-phase Darcy flow model with two different permeability configurations. The results are compared to the output of ensemble Kalman inversion, and results from Markov chain Monte Carlo methods are computed as a benchmark.
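The Sinkhorn iteration at the core of the entropy-regularised transport step is simple to state: alternately rescale the Gibbs kernel K = exp(−C/ε) until the plan has the desired marginals. A minimal NumPy sketch on toy particles (cost, weights, and ε are illustrative choices, not the paper's setup):

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.5, iters=1000):
    """Entropy-regularised optimal transport: alternate diagonal scalings of
    K = exp(-C/eps) until the plan P = diag(u) K diag(v) has marginals a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
x = rng.standard_normal(8)                 # e.g. prior particles
y = rng.standard_normal(8)                 # e.g. proposal particles
C = (x[:, None] - y[None, :]) ** 2         # squared-distance cost matrix
a = np.full(8, 1.0 / 8.0)                  # uniform source weights
b = rng.random(8); b /= b.sum()            # normalised importance weights
P = sinkhorn(C, a, b)
print(np.abs(P.sum(axis=1) - a).max(), np.abs(P.sum(axis=0) - b).max())  # both tiny
```

Each iteration costs only two matrix-vector products, which is the source of the advertised reduction in computational cost compared to solving the exact linear-programming transport problem.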
In the course of the Covid-19 pandemic, two figures are discussed daily: the most recently reported number of new infections and the so-called reproduction rate. The latter indicates how many further people an individual infected with Corona infects on average. There are many ways to estimate this value; the Robert Koch-Institut, too, reports two R values in its daily situation report: a 4-day R value and a less fluctuating 7-day R value. This thesis presents a further way of modelling some aspects of the pandemic and estimating the reproduction rate.
The first half of the thesis introduces the mathematical foundations needed for the modelling. It is assumed that the reader already has a basic understanding of stochastic processes. In the foundations section, branching processes are introduced with some examples, and the results from this field that are important for this thesis are presented. We first address simple branching processes and then extend them to multitype branching processes. To ease notation, we restrict ourselves to two types; the principle, however, extends to an arbitrary number of types.
Above all, the importance of the parameter λ is to be emphasised. This value can be interpreted as the average number of offspring of an individual and determines the dynamics of the process over a longer period of time. In the application to the pandemic, the parameter λ plays the same role as the reproduction rate R.
In the second half of this thesis we present an application of the theory of multitype branching processes. In their publication "Branching stochastic processes as models of Covid-19 epidemic development", Professor Yanev and his collaborators model the spread of the coronavirus via a two-type branching process. We discuss this model and derive estimators from it: the goal is to determine the reproduction rate. We also analyse ways of estimating the number of unreported cases. We apply the estimators to the figures for Germany and finally evaluate the results.
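The role of λ as the mean offspring number can be illustrated on a single-type Galton-Watson process. The following is a hypothetical minimal example with Poisson offspring and a Harris-type ratio estimator, not the two-type model of Yanev et al.:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.3                                  # true mean offspring number (the "R value")

def simulate_gw(z0, generations):
    """Galton-Watson process with Poisson(lam) offspring; returns generation sizes."""
    sizes = [z0]
    for _ in range(generations):
        z = rng.poisson(lam, size=sizes[-1]).sum() if sizes[-1] > 0 else 0
        sizes.append(z)
    return np.array(sizes)

Z = simulate_gw(z0=50, generations=12)
# Harris-type estimator: total number of offspring divided by total number of parents
lam_hat = Z[1:].sum() / Z[:-1].sum()
print(lam_hat)   # close to the true value 1.3
```

With λ > 1 the process is supercritical and the generation sizes grow geometrically, which mirrors the exponential growth phase of an epidemic; for λ < 1 the process dies out almost surely.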
A characterization of the essential spectrum of Schrödinger operators on infinite graphs is derived involving the concept of R-limits. This concept, which was introduced previously for operators on N and Z^d as "right-limits," captures the behaviour of the operator at infinity. For graphs with sub-exponential growth rate, we show that each point in σ_ess(H) corresponds to a bounded generalized eigenfunction of a corresponding R-limit of H. If, additionally, the graph is of uniform sub-exponential growth, the converse inclusion also holds.
In a previous study, a new snapshot modeling concept for the archeomagnetic field was introduced (Mauerberger et al., 2020). By assuming a Gaussian process for the geomagnetic potential, a correlation-based algorithm was presented, which incorporates a closed-form spatial correlation function. This work extends the suggested modeling strategy to the temporal domain. A space-time correlation kernel is constructed from the tensor product of the closed-form spatial correlation kernel with a squared exponential kernel in time. Dating uncertainties are incorporated into the modeling concept using a noisy input Gaussian process. All but one of the modeling hyperparameters are marginalized, to reduce their influence on the outcome and to translate their variability to the posterior variance. The resulting distribution incorporates uncertainties related to the dating, measurement, and modeling process. Results from application to archeomagnetic data show less variation in the dipole than comparable models, but are in general agreement with previous findings.
Contributions to the theoretical analysis of the algorithms with adversarial and dependent data
(2021)
In this work I present concentration inequalities of Bernstein type for the norms of Banach-valued random sums under a general functional weak-dependency assumption (so-called C-mixing). These are then used to prove, in the asymptotic framework, excess risk upper bounds for regularised Hilbert-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results from the batch statistical setting are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression-type algorithm in the setting of online nonparametric regression with arbitrary data sequences; here, in particular, the robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a C-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviations (both of Azuma-Hoeffding and of Burkholder type) of partial sums of real-valued weakly dependent random fields (under a projective-type dependence condition).
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework, a neural network consisting of random feature maps is trained sequentially by incoming observations within a data assimilation procedure. By employing Takens's embedding theorem, the network is trained on delay coordinates. We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
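The random-feature half of this construction is easy to sketch in isolation. Below, a hypothetical batch stand-in for the sequential DA training: internal weights of a random feature map are drawn once and frozen, and only the linear readout is fitted, here by ridge regression on a chaotic logistic-map trajectory (toy data, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(4)

# Surrogate training data: a trajectory of the chaotic logistic map x -> 3.9 x (1 - x)
x = np.empty(2000); x[0] = 0.4
for n in range(1999):
    x[n + 1] = 3.9 * x[n] * (1.0 - x[n])
X, Y = x[:-1], x[1:]                          # input/output pairs for the propagator map

# Random feature map phi(s) = tanh(w s + b): weights drawn once, never trained
D = 300
w = rng.uniform(-4.0, 4.0, D)
b = rng.uniform(-2.0, 2.0, D)
Phi = np.tanh(np.outer(w, X) + b[:, None])    # D x N feature matrix

# Only the linear readout W is fitted (ridge regression)
beta = 1e-6
W = np.linalg.solve(Phi @ Phi.T + beta * np.eye(D), Phi @ Y)

predict = lambda s: W @ np.tanh(w * s + b)    # learned one-step propagator
s = 0.71
print(abs(predict(s) - 3.9 * s * (1.0 - s)))  # small one-step prediction error
```

In RAFDA the batch least-squares fit above is replaced by sequential updates of W from noisy observations within an ensemble Kalman filter, and for partially observed systems the inputs are delay coordinates rather than the full state.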
We provide an overview of the tools and techniques of resurgence theory used in the Borel-Écalle resummation method, which we then apply to the massless Wess-Zumino model. Starting from already known results on the anomalous dimension of the Wess-Zumino model, we solve its renormalisation group equation for the two-point function in a space of formal series. We show that this solution is 1-Gevrey and that its Borel transform is resurgent. The Schwinger-Dyson equation of the model is then used to prove an asymptotic exponential bound for the Borel transformed two-point function on a star-shaped domain of a suitable ramified complex plane. This proves that the two-point function of the Wess-Zumino model is Borel-Écalle summable.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of both effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high patient variability. Model-informed precision dosing (MIPD) is a quantitative approach towards dose individualization based on mathematical modeling of dose-response relationships integrating therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for delayed effects. In addition, the underlying models are not improved while applied to patient data. Therefore, current approaches are not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated, as it allows reliable computation of the patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home monitoring devices that increase the amount of TDM data available, the data processing of sequential DA methods proves to be more efficient and facilitates handling of the variability between dosing events.
By transferring concepts from DA and RL we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to consider the delayed effects of dose selections. The combined DA-RL approach takes both aspects into account simultaneously and thus represents a holistic approach towards MIPD. Additionally, we show that RL can be used to gain insights into patient characteristics that are important for dose selection. In a simulation study based on a recent clinical study (CEPAC-TDM trial), the novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades compared to currently used MIPD approaches.
If MIPD is to be implemented in routine clinical practice, a certain model bias with respect to the underlying model is inevitable, as the models are typically based on data from comparably small clinical trials that reflect only to a limited extent the diversity in real-world patient populations. We propose a sequential hierarchical Bayesian inference framework that enables continuous cross-patient learning to learn the underlying model parameters of the target patient population. It is important to note that the approach only requires summary information of the individual patient data to update the model. This separation of the individual inference from population inference enables implementation across different centers of care.
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.
Data assimilation algorithms are used to estimate the states of a dynamical system using partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness for a wide range of application areas. Nevertheless, this filter also has limitations due to its inherent assumptions of Gaussianity and linearity, which can manifest themselves in the form of dynamically inconsistent state estimates. This issue is investigated here for balanced, slowly evolving solutions to highly oscillatory Hamiltonian systems which are prototypical for applications in numerical weather prediction. It is demonstrated that the standard ensemble Kalman filter can lead to state estimates that do not satisfy the pertinent balance relations and ultimately lead to filter divergence. Two remedies are proposed, one in terms of blended asymptotically consistent time-stepping schemes, and one in terms of minimization-based postprocessing methods. The effects of these modifications to the standard ensemble Kalman filter are discussed and demonstrated numerically for balanced motions of two prototypical Hamiltonian reference systems.
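For reference, the textbook stochastic EnKF analysis step (perturbed observations) that the abstract's remedies modify is a few lines of linear algebra. The sketch below is a generic linear-Gaussian toy, not the balanced Hamiltonian setting of the paper; the chosen prior, observation operator, and ensemble size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_analysis(E, y, H, R):
    """Stochastic EnKF analysis step with perturbed observations.
    E: d x m ensemble, y: observation, H: observation operator, R: obs error cov."""
    d, m = E.shape
    A = E - E.mean(axis=1, keepdims=True)
    C = A @ A.T / (m - 1)                          # ensemble covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return E + K @ (Y - H @ E)                     # shift each member toward the data

d, m = 2, 5000
E = rng.multivariate_normal([0.0, 0.0], np.eye(d), m).T   # prior ensemble ~ N(0, I)
H = np.array([[1.0, 0.0]])                                # observe first coordinate only
R = np.array([[1.0]])
y = np.array([2.0])
Ea = enkf_analysis(E, y, H, R)
print(Ea.mean(axis=1))   # first coordinate near the exact posterior mean 1.0
```

The update is a linear map of the ensemble, which is exactly why, in a nonlinear balanced system, the analysis members need not satisfy the balance relations even when the forecast members do; the paper's blended time-stepping and minimization-based postprocessing address this.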