510 Mathematik
Refine
Year of publication
Document Type
- Article (247)
- Preprint (93)
- Doctoral Thesis (74)
- Postprint (29)
- Monograph/Edited Volume (10)
- Other (10)
- Master's Thesis (6)
- Part of a Book (5)
- Conference Proceeding (5)
- Review (3)
Is part of the Bibliography
- yes (486)
Keywords
- data assimilation (8)
- regularization (8)
- Bayesian inference (7)
- Dirac operator (6)
- Navier-Stokes equations (6)
- cluster expansion (6)
- discrepancy principle (6)
- index (6)
- Cauchy problem (5)
- Fredholm property (5)
Institute
- Institut für Mathematik (423)
- Mathematisch-Naturwissenschaftliche Fakultät (14)
- Institut für Physik und Astronomie (13)
- Extern (9)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (7)
- Institut für Biochemie und Biologie (6)
- Institut für Informatik und Computational Science (5)
- Department Psychologie (4)
- Department Grundschulpädagogik (3)
- Hasso-Plattner-Institut für Digital Engineering GmbH (3)
- Institut für Philosophie (3)
- Strukturbereich Kognitionswissenschaften (3)
- Historisches Institut (2)
- Institut für Geowissenschaften (2)
- Präsident | Vizepräsidenten (2)
- Fachgruppe Politik- & Verwaltungswissenschaft (1)
- Fachgruppe Volkswirtschaftslehre (1)
- Institut für Slavistik (1)
- Interdisziplinäres Zentrum für Dynamik komplexer Systeme (1)
- Juristische Fakultät (1)
- Wirtschaftswissenschaften (1)
We prove finiteness and diameter bounds for graphs having a positive Ricci-curvature bound in the Bakry–Émery sense. Our first result using only curvature and maximal vertex degree is sharp in the case of hypercubes. The second result depends on an additional dimension bound, but is independent of the vertex degree. In particular, the second result is the first Bonnet–Myers type theorem for unbounded graph Laplacians. Moreover, our results improve diameter bounds from Fathi and Shu (Bernoulli 24(1):672–698, 2018) and Horn et al. (J für die reine und angewandte Mathematik (Crelle’s J), 2017, https://doi.org/10.1515/crelle-2017-0038) and solve a conjecture from Cushing et al. (Bakry–Émery curvature functions of graphs, 2016).
We complete the picture of how the asymptotic behavior of a dynamical system is reflected by properties of the associated Perron-Frobenius operator. Our main result states that strong convergence of the powers of the Perron-Frobenius operator is equivalent to setwise convergence of the underlying dynamics in the measure algebra. This situation is furthermore characterized by uniform mixing-like properties of the system.
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
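ShapeRotator itself is distributed as an R tool; purely as an illustration of the rigid-rotation idea it describes, here is a minimal Python sketch (landmark arrays and all names are hypothetical) that removes translation and rotation from a 3D landmark configuration via the Kabsch algorithm:

```python
import numpy as np

def rigid_align(moving, reference):
    """Align one 3D landmark set to another by removing
    translation and rotation (Kabsch algorithm)."""
    # Remove translation: center both configurations.
    mov_c = moving - moving.mean(axis=0)
    ref_c = reference - reference.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    H = mov_c.T @ ref_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    return mov_c @ R.T

# Hypothetical example: a rotated, translated copy of a landmark set
rng = np.random.default_rng(0)
ref = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
mov = ref @ Rz.T + np.array([5.0, -2.0, 1.0])
aligned = rigid_align(mov, ref)
rmsd = np.sqrt(((aligned - (ref - ref.mean(axis=0))) ** 2).mean())
```

After such an alignment only shape differences remain, which is the precondition for downstream geometric-morphometric analyses of articulated structures.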
We analyze a general class of self-adjoint difference operators Hε = Tε + Vε on ℓ²((εZ)^d), where Vε is a multi-well potential and ε is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. The eigenvalue problem for the Hamiltonian Hε is then treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by Hε, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of Hε converge to the first n eigenvalues of the direct sum of harmonic oscillators on R^d located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of Hε. These are obtained from eigenfunctions or quasimodes for the operator Hε acting on L²(R^d), via restriction to the lattice (εZ)^d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ²-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km(2)) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
We establish essential steps of an iterative approach to operator algebras, ellipticity and Fredholm property on stratified spaces with singularities of second order. We cover, in particular, corner-degenerate differential operators. Our constructions are focused on the case where no additional conditions of trace and potential type are posed, but this case works well and will be considered in a forthcoming paper as a conclusion of the present calculus.
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. However, knowledge of the associated changes in earthquake activity is of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model and estimate possible change-points; in addition, we use a likelihood ratio test to calculate the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, showing its capability to detect real change-points. Finally, we apply this approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
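The paper's approach is Bayesian; as a hedged sketch of just the likelihood-ratio ingredient it mentions, the following Python snippet scans a synthetic Poisson event catalog for the single change-point that maximizes a two-rate likelihood (function names, grid, and rates are illustrative, not the paper's method):

```python
import numpy as np

def poisson_loglik(n_events, duration):
    """Log-likelihood of a homogeneous Poisson process at the MLE rate."""
    if n_events == 0 or duration <= 0:
        return 0.0
    rate = n_events / duration
    return n_events * np.log(rate) - rate * duration

def best_change_point(times, T, grid):
    """Scan candidate change-points; return the one maximizing the
    two-rate log-likelihood and the likelihood-ratio statistic."""
    times = np.asarray(times)
    ll_single = poisson_loglik(len(times), T)
    best_tau, best_ll = None, -np.inf
    for tau in grid:
        n1 = int((times <= tau).sum())
        n2 = len(times) - n1
        ll = poisson_loglik(n1, tau) + poisson_loglik(n2, T - tau)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau, 2.0 * (best_ll - ll_single)

# Synthetic catalog: rate 0.5/day for 100 days, then 2.0/day for 100 days
rng = np.random.default_rng(1)
t1 = np.sort(rng.uniform(0, 100, rng.poisson(0.5 * 100)))
t2 = np.sort(rng.uniform(100, 200, rng.poisson(2.0 * 100)))
times = np.concatenate([t1, t2])
tau_hat, lr = best_change_point(times, 200.0, np.arange(10.0, 190.0, 1.0))
```

A large likelihood-ratio statistic at the best candidate indicates a significant rate change near that time; the full method additionally weighs competing change-point models via the Bayes factor.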
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that, under certain assumptions, the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be attributed to accessible seismological quantities. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and large dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates paleoearthquake sequences and instrumental data in a simple way. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
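The Brownian passage time distribution referred to above has the explicit (inverse Gaussian) density f(t) = sqrt(μ/(2πα²t³)) exp(−(t−μ)²/(2α²μt)), with mean recurrence interval μ and aperiodicity α. A small numerical sketch (parameter values purely illustrative):

```python
import numpy as np

def bpt_density(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with
    mean recurrence interval mu and aperiodicity alpha."""
    t = np.asarray(t, dtype=float)
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu)**2 / (2.0 * alpha**2 * mu * t))

# Illustrative parameters: mean interval 150 yr, aperiodicity 0.5
t = np.linspace(1e-3, 3000.0, 300000)
pdf = bpt_density(t, mu=150.0, alpha=0.5)
total = np.trapz(pdf, t)     # numerical check: integrates to ~1
mean = np.trapz(t * pdf, t)  # numerical check: mean is ~mu
```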
The important role that metacognition plays as a predictor of students' mathematical learning and mathematical problem-solving has been extensively documented. But only recently has attention turned to the primary grades, and more research is needed at this level. The goals of this paper are threefold: (1) to present a metacognitive framework for mathematics problem-solving, (2) to describe a multi-method interview approach developed to study students' mathematical metacognition, and (3) to empirically evaluate the utility of the model and the adaptation of the approach in the context of grade 2 and grade 4 mathematics problem-solving. The results are discussed not only with regard to further development of the adapted multi-method interview approach, but also with regard to their theoretical and practical implications.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
Early mathematical education (Frühe mathematische Bildung)
(2018)
This article presents current research trends in early mathematical education in the context of recently formulated target dimensions for early mathematical education (see Benz et al., 2017). It addresses play-based intervention programs, competencies in the area of "space and shape", the influence of language-related parameters on the development of mathematical competencies, and the mathematics-related competencies of early childhood educators. In addition, the results of a recent field study on fostering early mathematical competencies (see Dillon, Kannan, Dean, Spelke & Duflo, 2017) are presented. Finally, the development and implementation of coherent, connective educational concepts is discussed as one of the central challenges for future research and educational efforts.
We consider the problem of low rank matrix recovery in a stochastically noisy high-dimensional setting. We propose a new estimator for the low rank matrix, based on the iterative hard thresholding method, that is computationally efficient and simple. We prove that our estimator is optimal in terms of the Frobenius risk and in terms of the entry-wise risk uniformly over any change of orthonormal basis, allowing us to provide the limiting distribution of the estimator. When the design is Gaussian, we prove that the entry-wise bias of the limiting distribution of the estimator is small, which is of interest for constructing tests and confidence sets for low-dimensional subsets of entries of the low rank matrix.
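As a rough illustration of the iterative hard thresholding idea (not the paper's tuned estimator: the step size, dimensions, and Gaussian design below are assumptions), each iteration takes a gradient step on the least-squares fit and projects back onto rank-r matrices via a truncated SVD:

```python
import numpy as np

def hard_threshold(X, r):
    """Project onto rank-r matrices via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def iht_matrix_recovery(A, y, r, n_iter=100):
    """Iterative hard thresholding: gradient step on the least-squares
    objective followed by a rank-r projection."""
    n, p, q = A.shape
    X = np.zeros((p, q))
    for _ in range(n_iter):
        resid = y - np.einsum('ipq,pq->i', A, X)
        X = hard_threshold(X + np.einsum('i,ipq->pq', resid, A) / n, r)
    return X

# Illustrative experiment: rank-2 ground truth, Gaussian design, small noise
rng = np.random.default_rng(2)
p = q = 20; r = 2; n = 800
X_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
A = rng.normal(size=(n, p, q))
y = np.einsum('ipq,pq->i', A, X_true) + 0.01 * rng.normal(size=n)
X_hat = iht_matrix_recovery(A, y, r)
rel_err = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
```

With an isotropic Gaussian design the averaged back-projection is close to the identity on low-rank matrices, which is why the plain step size used here already contracts toward the truth in this sketch.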
Bienaymé-Galton-Watson processes can be used to study particular evolving populations. These populations consist of individuals that reproduce identically, randomly, autonomously, and independently of one another, and that each live for only one generation. The n-th generation arises as a random sum over the individuals of the (n-1)-th generation. The relevance of these processes rests on their history and on their significance both within and outside mathematics. The history of Bienaymé-Galton-Watson processes is traced through the development of the concept up to the present day, naming the scientists from various disciplines who contributed insights and applied the concept in their fields; this establishes the extra-mathematical significance. The intra-mathematical importance follows from the concept of branching processes, which goes back to Bienaymé-Galton-Watson processes; branching processes are among the most expressive models for describing population growth. Their current relevance also lies in the applicability of branching processes and Bienaymé-Galton-Watson processes in epidemiology: the Ebola and Corona pandemics are discussed as fields of application. The processes serve as decision support for policy-makers and allow statements about the effects of measures concerning the pandemics. In addition to the processes themselves, the conditional expectation with respect to discrete random variables, the probability generating function, and random sums are introduced. These concepts simplify the description of the processes and thus form the basis of the subsequent considerations. The required and further properties of the basic topics and of the processes are stated and proved.

The chapter culminates in the proof of the criticality theorem, which yields a statement about the extinction of the process in the various cases and hence about the extinction probability. The cases are distinguished by the expected number of offspring of a single individual: a process with an expected offspring number of at most one dies out with certainty, whereas for an expected number greater than one the population need not die out. Individual examples are then discussed, such as the linear fractional case, a population of mouse fibroblasts (connective-tissue cells), and the question of the origin of the processes. These are analyzed with the results obtained, and selected random dynamics are simulated in the following chapter. The simulations are carried out by a program written in Python and are realized using the inversion method. They illustrate, by example, the evolution in the different criticality cases of the processes. In addition, the frequencies of the individual population sizes are presented as histograms. The difference between the cases is thereby confirmed, and the applicability of Bienaymé-Galton-Watson processes to more complex problems becomes apparent. The histograms support the statement that each population size occurs only finitely often, an observation raised by Galton and used in the extinction-explosion dichotomy. The presentation of the topic concludes with a didactical analysis, which takes into account the Fundamental Ideas, the Fundamental Ideas of stochastics, and the guiding idea "Daten und Zufall" (data and chance).

It turns out that, depending on the perspective taken, treating Bienaymé-Galton-Watson processes in school is plausible and can benefit students. As an example, the framework curriculum of Berlin and Brandenburg is analyzed and compared with the core curriculum of North Rhine-Westphalia. The design of the Berlin-Brandenburg curriculum does not support the conclusion that Bienaymé-Galton-Watson processes should be used; the underlying guiding idea turns out not to be fully compatible with some Fundamental Ideas of stochastics. A modification of the curriculum towards a stronger orientation on the Fundamental Ideas would thus make the use of the processes possible. This conclusion is supported by examining a North Rhine-Westphalian lesson plan for stochastic processes and transferring it to Bienaymé-Galton-Watson processes. In addition, a concept map and a "Vernetzungspentagraph" (networking pentagraph) in the sense of von der Bank are designed to highlight this aspect.
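The Python simulations described in the abstract, namely sampling a Bienaymé-Galton-Watson process by the inversion method, can be sketched as follows (the geometric offspring law and its parameters are illustrative choices, not those of the thesis):

```python
import random

def sample_offspring_inversion(pmf_term, u):
    """Inversion method: return the smallest k with CDF(k) >= u,
    where pmf_term(k) gives P(offspring = k)."""
    k, cdf = 0, pmf_term(0)
    while cdf < u:
        k += 1
        cdf += pmf_term(k)
    return k

def simulate_bgw(pmf_term, n_generations, rng):
    """Simulate generation sizes of a Bienaymé-Galton-Watson process
    started from a single individual."""
    sizes = [1]
    for _ in range(n_generations):
        z = sum(sample_offspring_inversion(pmf_term, rng.random())
                for _ in range(sizes[-1]))
        sizes.append(z)
        if z == 0:  # extinction is absorbing
            sizes.extend([0] * (n_generations - len(sizes) + 1))
            break
    return sizes

# Illustrative subcritical example: geometric offspring with mean 0.8
p = 1 / 1.8  # success probability; offspring mean (1 - p) / p = 0.8 < 1
geom = lambda k: p * (1 - p) ** k
rng = random.Random(42)
sizes = simulate_bgw(geom, 30, rng)
```

By the criticality theorem, with offspring mean 0.8 this process dies out with certainty; repeating the simulation for means equal to and above one would illustrate the critical and supercritical cases.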
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
We study travelling chimera states in a ring of nonlocally coupled heterogeneous (with Lorentzian distribution of natural frequencies) phase oscillators. These states are coherence-incoherence patterns moving in the lateral direction because of the broken reflection symmetry of the coupling topology. To explain the results of direct numerical simulations we consider the continuum limit of the system. In this case travelling chimera states correspond to smooth travelling wave solutions of some integro-differential equation, called the Ott–Antonsen equation, which describes the long time coarse-grained dynamics of the oscillators. Using the Lyapunov–Schmidt reduction technique we suggest a numerical approach for the continuation of these travelling waves. Moreover, we perform their linear stability analysis and show that travelling chimera states can lose their stability via fold and Hopf bifurcations. Some of the Hopf bifurcations turn out to be supercritical resulting in the observation of modulated travelling chimera states.
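A direct numerical simulation of this kind of system can be sketched as follows: a ring of phase oscillators with Lorentzian-distributed natural frequencies and a nonlocal cosine coupling kernel, integrated with the Euler method (the kernel, parameters, and integration scheme are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def simulate_ring(N=128, steps=1000, dt=0.05, A=0.9, alpha=1.46,
                  gamma=0.01, seed=3):
    """Euler integration of a ring of nonlocally coupled phase oscillators;
    natural frequencies are drawn from a Lorentzian of half-width gamma."""
    rng = np.random.default_rng(seed)
    x = 2 * np.pi * np.arange(N) / N  # oscillator positions on the ring
    # Lorentzian draws via the inverse CDF
    omega = gamma * np.tan(np.pi * (rng.random(N) - 0.5))
    # Cosine coupling kernel G(x_k - x_j); row k couples oscillator k to all j
    G = (1 + A * np.cos(x[:, None] - x[None, :])) / (2 * np.pi)
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        coupling = (2 * np.pi / N) * np.sum(
            G * np.sin(theta[:, None] - theta[None, :] + alpha), axis=1)
        theta = theta + dt * (omega - coupling)
    return np.mod(theta, 2 * np.pi)

theta = simulate_ring()
```

Plotting the final phases against position typically reveals coherent and incoherent regions; breaking the reflection symmetry of the kernel is what sets such patterns in lateral motion.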
Given two weighted graphs (X, b_k, m_k), k = 1, 2, with b_1 ~ b_2 and m_1 ~ m_2, we prove a weighted ℓ¹-criterion for the existence and completeness of the wave operators W_±(H_2, H_1, I_{1,2}), where H_k denotes the natural Laplacian in ℓ²(X, m_k) with respect to (X, b_k, m_k) and I_{1,2} the trivial identification of ℓ²(X, m_1) with ℓ²(X, m_2). In particular, this entails a general criterion for the absolutely continuous spectra of H_1 and H_2 to be equal.
We prove that if u is a locally Lipschitz continuous function on an open set χ ⊂ R^{n+1} satisfying the nonlinear heat equation ∂_t u = Δ(|u|^{p-1} u), p > 1, weakly away from the zero set u^{-1}(0) in χ, then u is a weak solution to this equation in all of χ.
Games and game-typical elements such as collecting loyalty points have become an integral part of everyday life. They are also increasingly used in companies and in learning environments. So far, however, the method of gamification has hardly been classified for the educational context or made accessible to teachers.
This bachelor's thesis therefore aims to present a systematic structuring and treatment of gamification as well as innovative approaches to using game-typical elements in teaching, specifically in mathematics lessons. This can provide a basis for other subject areas and other forms of teaching, demonstrating how gamification can be implemented in one's own courses.
The thesis argues why, and by means of which elements, gamification can increase learners' motivation and willingness to perform in the long term, foster social and personal competencies, and stimulate learners to greater activity. Gamification is also explicitly related to fundamental principles of mathematics education, thereby highlighting its relevance for mathematics teaching.
The individual elements of gamification, such as points, levels, badges, characters, and a narrative frame, are then described schematically along a classification developed specifically for the educational context, "FUN" (Feedback - User-specific elements - Neutral elements); their functions and effects are presented, and possible uses in the classroom are shown. This includes ideas for learning-conducive feedback, options for differentiation, and the framing of lessons, all of which should be transferable to courses of all kinds. The thesis also contains a specific example, a lesson plan for a gamified mathematics lesson including the accompanying working material, which illustrates the use of gamification.
Gamification often offers advantages over traditional teaching, but like any method it must be adapted to the content and the target group. Further research could address specific motivational structures, individual differences, and mathematical content such as problem-solving or switching between different representations with respect to gamified forms of teaching.
In the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons these solvers are so fast is that they exploit structural properties of instances. This thesis deals with the well-studied structural property treewidth, which measures how close an instance is to being a tree. Indeed, many problems are solvable in time polynomial in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), that allows us to precisely monitor the treewidth when reducing from one problem to another. This new reduction type is the basis for a long-open lower-bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which this problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
In the present paper, we study the problem of existence of honest and adaptive confidence sets for matrix completion. We consider two statistical models: the trace regression model and the Bernoulli model. In the trace regression model, we show that honest confidence sets that adapt to the unknown rank of the matrix exist even when the error variance is unknown. Contrary to this, we prove that in the Bernoulli model, honest and adaptive confidence sets exist only when the error variance is known a priori. In the course of our proofs, we obtain bounds for the minimax rates of certain composite hypothesis testing problems arising in low rank inference.
We analyze a general class of difference operators Hε=Tε+Vε on ℓ2((εZ)d), where Vε is a multi-well potential and ε is a small parameter. We derive full asymptotic expansions of the prefactor of the exponentially small eigenvalue splitting due to interactions between two “wells” (minima) of the potential energy, i.e., for the discrete tunneling effect. We treat both the case where there is a single minimal geodesic (with respect to the natural Finsler metric induced by the leading symbol h0(x,ξ) of Hε) connecting the two minima and the case where the minimal geodesics form an ℓ+1 dimensional manifold, ℓ≥1. These results on the tunneling problem are as sharp as the classical results for the Schrödinger operator in Helffer and Sjöstrand (Commun PDE 9:337–408, 1984). Technically, our approach is pseudo-differential and we adapt techniques from Helffer and Sjöstrand [Analyse semi-classique pour l’équation de Harper (avec application à l’équation de Schrödinger avec champ magnétique), Mémoires de la S.M.F., 2 series, tome 34, pp 1–113, 1988)] and Helffer and Parisse (Ann Inst Henri Poincaré 60(2):147–187, 1994) to our discrete setting.
Contributions to the theoretical analysis of the algorithms with adversarial and dependent data
(2021)
In this work I present concentration inequalities of Bernstein type for the norms of Banach-valued random sums under a general functional weak-dependence assumption (so-called C-mixing). These are then used to prove, in the asymptotic framework, excess-risk upper bounds for regularised Hilbert-valued statistical learning rules under a τ-mixing assumption on the underlying training sample. These results for the batch statistical setting are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression type algorithm in the setting of online nonparametric regression with arbitrary data sequences. Here, in particular, the robustness of the kernel-based forecaster is investigated. Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a C-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviations (both of Azuma-Hoeffding and of Burkholder type) of partial sums of real-valued weakly dependent random fields under a projective-type dependence condition.
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and the associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763-777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated for a predefined finite time interval and an imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with constant b-value and calculate the posterior distribution of μ for the future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for different time intervals of 30, 50, and 100 years and imposed levels of confidence α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to find the influence of declustering, we use the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals equal to 30 and 50 years and imposed levels of confidence 1 - α = 0.95 and 0.90, the estimated μ lies in the range 7.16-8.23 in all seismic zones.
We study the Volterra property of a class of anisotropic pseudo-differential operators on R × B for a manifold B with edge Y and time variable t. This exposition belongs to a program for studying parabolicity in such a situation. In the present consideration we establish non-smoothing elements in a subalgebra with anisotropic operator-valued symbols of Mellin type, holomorphic in the complex Mellin covariable from the cone theory, where the symbols extend with respect to the covariable of t to the lower complex half-plane. The resulting space of Volterra operators enlarges an approach of Buchholz (Parabolische Pseudodifferentialoperatoren mit operatorwertigen Symbolen. Ph.D. thesis, Universität Potsdam, 1996) by elements necessary for a new operator algebra containing Volterra parametrices under an appropriate condition of anisotropic ellipticity. Our approach avoids some difficulty in choosing Volterra quantizations in the edge case by generalizing specific achievements of the isotropic edge calculus, obtained by Seiler (Pseudodifferential calculus on manifolds with non-compact edges. Ph.D. thesis, University of Potsdam, 1997); see also Gil et al. (in: Demuth et al (eds) Mathematical research, vol 100. Akademie Verlag, Berlin, pp 113-137, 1997; Osaka J Math 37:221-260, 2000).
If (T_t) is a semigroup of Markov operators on an L^1-space that admits a nontrivial lower bound, then a well-known theorem of Lasota and Yorke asserts that the semigroup is strongly convergent as t → ∞. In this article we generalize and improve this result in several respects. First, we give a new and very simple proof of the fact that the same conclusion also holds if the semigroup is merely assumed to be bounded instead of Markov. As a main result, we then prove a version of this theorem for semigroups which only admit certain individual lower bounds. Moreover, we generalize a theorem of Ding on semigroups of Frobenius-Perron operators. We also demonstrate how our results can be adapted to the setting of general Banach lattices, and we give some counterexamples to show optimality of our results. Our methods combine some rather concrete estimates and approximation arguments with abstract functional-analytic tools. One of these tools is a theorem which relates the convergence of a time-continuous operator semigroup to the convergence of embedded discrete semigroups.
In the course of the Covid-19 pandemic, two figures are discussed daily: the most recently reported number of new infections and the so-called reproduction rate. The latter indicates how many other people an individual infected with Corona infects on average. There are many ways to estimate this value; the Robert Koch-Institut, for instance, always reports two R values in its daily situation report: a 4-day R value and a less volatile 7-day R value. This thesis presents a further way to model some aspects of the pandemic and to estimate the reproduction rate.
The first half of the thesis presents the mathematical foundations required for the modelling, assuming that the reader already has a basic understanding of stochastic processes. The fundamentals section introduces branching processes with several examples and presents the results from this area that are important for this thesis. We first treat simple branching processes and then extend them to branching processes with several types. To simplify notation we restrict ourselves to two types, but the principle extends to an arbitrary number of types.
Above all, the importance of the parameter λ is emphasized. This value can be interpreted as the average number of offspring of an individual and determines the dynamics of the process over a longer period. In the application to the pandemic, the parameter λ plays the same role as the reproduction rate R.
In the second half of this thesis we present an application of the theory of multi-type branching processes. In their publication "Branching stochastic processes as models of Covid-19 epidemic development", Professor Yanev and his coworkers model the spread of the corona virus by a branching process with two types. We discuss this model and derive estimators from it, the goal being to determine the reproduction rate. We also analyse ways to estimate the number of unreported cases. Finally, we apply the estimators to the German case numbers and evaluate the results.
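The mean offspring number λ of a branching process (and hence, in this application, the reproduction rate R) can be estimated from observed generation sizes with the classical Harris-type ratio estimator. A minimal sketch with illustrative numbers, not the thesis's two-type estimator or real case data:

```python
def harris_estimator(generation_sizes):
    """Harris-type estimator of the mean offspring number lambda from
    observed generation sizes Z_0, Z_1, ..., Z_n of a branching process:
    lambda_hat = (Z_1 + ... + Z_n) / (Z_0 + ... + Z_{n-1})."""
    num = sum(generation_sizes[1:])
    den = sum(generation_sizes[:-1])
    if den == 0:
        raise ValueError("process already extinct")
    return num / den

# hypothetical daily case counts treated as generation sizes
cases = [10, 12, 18, 24, 30]
print(harris_estimator(cases))
```

A value above 1 corresponds to a supercritical process, i.e. a growing epidemic.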
The size structure of autotroph communities - the relative abundance of small vs. large individuals - shapes the functioning of ecosystems. Whether common mechanisms underpin the size structure of unicellular and multicellular autotrophs is, however, unknown. Using a global data compilation, we show that individual body masses in tree and phytoplankton communities follow power-law distributions and that the average exponents of these individual size distributions (ISD) differ. Phytoplankton communities are characterized by an average ISD exponent consistent with three-quarter-power scaling of metabolism with body mass and equivalence in energy use among mass classes. Tree communities deviate from this pattern in a manner consistent with equivalence in energy use among diameter size classes. Our findings suggest that whilst universal metabolic constraints ultimately underlie the emergent size structure of autotroph communities, divergent aspects of body size (volumetric vs. linear dimensions) shape the ecological outcome of metabolic scaling in forest vs. pelagic ecosystems.
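Power-law exponents of individual size distributions such as those discussed above are commonly fitted by maximum likelihood. The following is a sketch of the standard continuous-data estimator (the Clauset-Shalizi-Newman form); that this particular estimator was used in the study is our assumption, made purely for illustration.

```python
import math

def powerlaw_mle(sizes, x_min):
    """Continuous maximum-likelihood estimate of the exponent alpha of a
    power-law size distribution p(x) ~ x^(-alpha) for x >= x_min:
    alpha_hat = 1 + n / sum(ln(x_i / x_min))."""
    tail = [x for x in sizes if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# demo: recover alpha = 2 from idealized quantile samples of the power law
n, alpha, x_min = 5000, 2.0, 1.0
samples = [x_min * (1 - (i + 0.5) / n) ** (-1.0 / (alpha - 1.0)) for i in range(n)]
print(round(powerlaw_mle(samples, x_min), 2))
```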
Let (M_i, g_i)_{i ∈ ℕ} be a sequence of spin manifolds with uniformly bounded curvature and diameter that converges to a lower-dimensional Riemannian manifold (B, h) in the Gromov-Hausdorff topology. Then the spectrum of the Dirac operator converges to the spectrum of a certain first-order elliptic differential operator D_B on B. We give an explicit description of D_B and characterize the special case in which D_B equals the Dirac operator on B.
We show that elliptic complexes of (pseudo) differential operators on smooth compact manifolds with boundary can always be complemented to a Fredholm problem by boundary conditions involving global pseudodifferential projections on the boundary (similarly as the spectral boundary conditions of Atiyah, Patodi, and Singer for a single operator). We prove that boundary conditions without projections can be chosen if, and only if, the topological Atiyah-Bott obstruction vanishes. These results make use of a Fredholm theory for complexes of operators in algebras of generalized pseudodifferential operators of Toeplitz type which we also develop in the present paper.
We continue our study of invariant forms of the classical equations of mathematical physics, such as the Maxwell equations or the Lamé system, on manifolds with boundary. To this end we interpret them in terms of the de Rham complex at a certain step. Using the structure of the complex, we gain insight that allows us to predict a degeneracy deeply encoded in the equations. In the present paper we develop an invariant approach to the classical Navier-Stokes equations.
Many machine learning problems can be characterized by mutual contamination models. In these problems, one observes several random samples from different convex combinations of a set of unknown base distributions and the goal is to infer these base distributions. This paper considers the general setting where the base distributions are defined on arbitrary probability spaces. We examine three popular machine learning problems that arise in this general setting: multiclass classification with label noise, demixing of mixed membership models, and classification with partial labels. In each case, we give sufficient conditions for identifiability and present algorithms for the infinite and finite sample settings, with associated performance guarantees.
In this paper we develop a general framework for constructing and analyzing coupled Markov chain Monte Carlo samplers, allowing for both (possibly degenerate) diffusion and piecewise deterministic Markov processes. For many performance criteria of interest, including the asymptotic variance, the task of finding efficient couplings can be phrased in terms of problems related to optimal transport theory. We investigate general structural properties, proving a singularity theorem that has both geometric and probabilistic interpretations. Moreover, we show that those problems can often be solved approximately and support our findings with numerical experiments. For the particular objective of estimating the variance of a Bayesian posterior, our analysis suggests using novel techniques in the spirit of antithetic variates. Addressing the convergence to equilibrium of coupled processes, we furthermore derive a modified Poincaré inequality.
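The antithetic-variates idea mentioned above can be illustrated in the simplest possible setting, far from the diffusion/PDMP framework of the paper: two random-walk Metropolis chains driven by mirrored proposal innovations and a shared accept/reject uniform. All names and parameters below are illustrative.

```python
import math, random

def mirror_coupled_rwmh(n_steps, step=1.0, seed=1):
    """Two random-walk Metropolis chains targeting N(0,1), coupled by
    mirrored (antithetic) innovations and a shared uniform in the
    accept/reject step; returns the antithetic averages (x_t + y_t)/2."""
    rng = random.Random(seed)
    log_pi = lambda z: -0.5 * z * z          # log-density of N(0,1), up to a constant
    x, y = 1.0, -1.0                         # mirror-symmetric initial states
    avg = []
    for _ in range(n_steps):
        xi = rng.gauss(0.0, step)
        log_u = math.log(rng.random())       # shared accept/reject variable
        px, py = x + xi, y - xi              # mirrored proposals
        if log_u < log_pi(px) - log_pi(x):
            x = px
        if log_u < log_pi(py) - log_pi(y):
            y = py
        avg.append(0.5 * (x + y))            # antithetic average
    return avg

est = sum(mirror_coupled_rwmh(10000)) / 10000
print(est)
```

For this symmetric target and symmetric initialization the mirror coupling is exact (y_t = −x_t for all t), so the averaged estimator of the mean has zero variance; for asymmetric targets the coupling merely induces negative correlation and hence variance reduction.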
Packungen aus Kreisscheiben
(2019)
The English seafarer Sir Walter Raleigh once asked himself how he could stack as many cannonballs as possible in his ship's hold. In 1611, Johannes Kepler subsequently formulated a conjecture about the optimal arrangement of the balls. This conjecture would prove to be one of the hardest mathematical nuts in history. Even in the plane, densest packings of congruent circles are a challenge. In 1892 and 1910, Axel Thue published (criticized) proofs that the hexagonal circle packing is optimal. Only in 1940 did László Fejes Tóth finally deliver a watertight proof of this fact. A variant of the problem asks for packings of finitely many congruent circles that minimize a certain quadratic energy: this intriguing geometric problem was posed by Tóth in 1967. It is still not completely solved today. In this contribution the authors propose an original probabilistic method for constructing approximations of the solution in the plane.
We provide explicit examples of positive and power-bounded operators on c_0 and ℓ^∞ which are mean ergodic but not weakly almost periodic. As a consequence we prove that a countably order complete Banach lattice on which every positive and power-bounded mean ergodic operator is weakly almost periodic is necessarily a KB-space. This answers several open questions from the literature. Finally, we prove that if T is a positive mean ergodic operator with zero fixed space on an arbitrary Banach lattice, then so is every power of T.
For a singularly perturbed parabolic-ODE system we construct the asymptotic expansion in the small parameter in the case when the degenerate equation has a double root. Such systems, called partly dissipative reaction-diffusion systems, are used to model various natural processes, including signal transmission along axons, solid combustion, and the kinetics of some chemical reactions. It turns out that the algorithm for constructing the boundary-layer functions and the behavior of the solution in the boundary layers differ essentially from those in the case of a simple root. A multizonal structure of the initial and boundary layers is established.
We discuss canonical representations of the de Rham cohomology on a compact manifold with boundary. They are obtained by minimising the energy integral in a Hilbert space of differential forms that belong along with the exterior derivative to the domain of the adjoint operator. The corresponding Euler-Lagrange equations reduce to an elliptic boundary value problem on the manifold, which is usually referred to as the Neumann problem after Spencer.
Data assimilation
(2019)
Data assimilation addresses the general problem of how to combine model-based predictions with partial and noisy observations of the process in an optimal manner. This survey focuses on sequential data assimilation techniques using probabilistic particle-based algorithms. In addition to surveying recent developments for discrete- and continuous-time data assimilation, both in terms of mathematical foundations and algorithmic implementations, we also provide a unifying framework from the perspective of coupling of measures, and Schrödinger’s boundary value problem for stochastic processes in particular.
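The sequential, particle-based approach surveyed here can be sketched with the classical bootstrap particle filter on a linear-Gaussian toy model. This is a minimal illustration under assumed model parameters (`a`, `q`, `r` below), not an algorithm from the survey itself.

```python
import math, random

def bootstrap_pf(obs, n_part=500, a=0.9, q=0.5, r=0.5, seed=7):
    """Bootstrap particle filter for the toy state-space model
    x_t = a*x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    Returns the sequence of filtered posterior means."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]   # prior ensemble
    means = []
    for y in obs:
        parts = [a * x + rng.gauss(0.0, q) for x in parts]            # predict
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]      # weight
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        parts = rng.choices(parts, weights=w, k=n_part)               # resample
    return means

# with repeated observations y = 2, the filtered mean settles between
# the prior attractor 0 and the data value 2
print(round(bootstrap_pf([2.0] * 30)[-1], 2))
```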
By adapting the Cheeger-Simons approach to differential cohomology, we establish a notion of differential cohomology with compact support. We show that it is functorial with respect to open embeddings and that it fits into a natural diagram of exact sequences which compare it to compactly supported singular cohomology and differential forms with compact support, in full analogy to ordinary differential cohomology. We prove an excision theorem for differential cohomology using a suitable relative version. Furthermore, we use our model to give an independent proof of Pontryagin duality for differential cohomology, recovering a result of [Harvey, Lawson, Zweck - Amer. J. Math. 125 (2003), 791]: On any oriented manifold, ordinary differential cohomology is isomorphic to the smooth Pontryagin dual of compactly supported differential cohomology. For manifolds of finite type, a similar result is obtained interchanging ordinary with compactly supported differential cohomology.
The multifaceted nature of the concept of angle is as fascinating as it is challenging with regard to its treatment in school mathematics. Starting from different conceptions of the angle concept, this thesis develops a teaching course for conveying the angle concept and finally turns it into concrete implementations for classroom use.
The thesis first undertakes a subject-didactic analysis of the angle concept, accompanied by an information-theoretic definition of angle. Here, a definition of the angle concept is developed by asking which information about an angle one needs in order to describe it. In this way, the conceptions of angle found in the didactic literature can be re-derived and validated from a mathematical perspective. In parallel, a procedure is described for processing angles computationally, including dynamic aspects, so that consequences of the information-theoretic definition of angle become available, for example, in dynamic geometry systems.
With regard to how an abstraction of the angle concept can take place in mathematics teaching, the notion of "Grundvorstellungen" (basic mental models) and the teaching strategy of ascending from the abstract to the concrete are related to each other. From the combination of the two theories, a general route is derived for building up, within the teaching strategy, an initial abstraction of individual angle aspects, which is meant to enable the generation of basic mental models of the components of the respective angle aspect and of operating with these components. For this purpose the teaching strategy is adapted, in particular to realize the transition from angle situations to angle contexts. Explicitly for the aspect of the angle field, learning actions and requirements for a learning model that support pupils in acquiring the concept are described, based on an investigation of the fields of vision of animals.
Activity theory, to which the above teaching strategy belongs, runs as a common thread through the remainder of the thesis, where design principles are generated on a theoretical basis and lead to the development of an interactive learning environment. Among other things, the model of Artifact-Centric Activity Theory is used, which describes the web of relations between pupils, the mathematical object, and an app to be developed as mediating medium; the use of the app in the classroom context and its rule-guided development are part of the model. Following the approach of didactical design research, the learning environment is then tested, evaluated, and revised in several cycles. A qualitative setting is used which draws on semiotic mediation and investigates to what extent the quality of the learning actions shown by the pupils can be explained by the design principles and their implementation. The thesis concludes with a final version of the design principles and a resulting learning environment for introducing the concept of the angle field in fourth grade.
Permafrost warming has the potential to amplify global climate change, because when frozen sediments thaw it unlocks soil organic carbon. Yet to date, no globally consistent assessment of permafrost temperature change has been compiled. Here we use a global data set of permafrost temperature time series from the Global Terrestrial Network for Permafrost to evaluate temperature change across permafrost regions for the period since the International Polar Year (2007-2009). During the reference decade between 2007 and 2016, ground temperature near the depth of zero annual amplitude in the continuous permafrost zone increased by 0.39 ± 0.15 °C. Over the same period, discontinuous permafrost warmed by 0.20 ± 0.10 °C. Permafrost in mountains warmed by 0.19 ± 0.05 °C and in Antarctica by 0.37 ± 0.10 °C. Globally, permafrost temperature increased by 0.29 ± 0.12 °C. The observed trend follows the Arctic amplification of air temperature increase in the Northern Hemisphere. In the discontinuous zone, however, ground warming occurred due to increased snow thickness while air temperature remained statistically unchanged.
For a finite measure space X, we characterize strongly continuous Markov lattice semigroups on L^p(X) by showing that their generator A acts as a derivation on the dense subspace D(A) ∩ L^∞(X). We then use this to characterize Koopman semigroups on L^p(X) when X is a standard probability space. In addition, we show that every measurable and measure-preserving flow on a standard probability space is isomorphic to a continuous flow on a compact Borel probability space.
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
We study the spectral properties of curl, a linear differential operator of first order acting on differential forms of appropriate degree on an odd-dimensional closed oriented Riemannian manifold. In three dimensions, its eigenvalues are the electromagnetic oscillation frequencies in vacuum without external sources. In general, the spectrum consists of the eigenvalue 0 with infinite multiplicity and further real discrete eigenvalues of finite multiplicity. We compute the Weyl asymptotics and study the zeta-function. We give a sharp lower eigenvalue bound for positively curved manifolds and analyze the equality case. Finally, we compute the spectrum for flat tori, round spheres, and 3-dimensional spherical space forms. Published under license by AIP Publishing.
Background: Circulating infliximab (IFX) concentrations correlate with clinical outcomes, forming the basis of the IFX concentration monitoring in patients with Crohn's disease. This study aims to investigate and refine the exposure-response relationship by linking the disease activity markers "Crohn's disease activity index" (CDAI) and C-reactive protein (CRP) to IFX exposure. In addition, we aim to explore the correlations between different disease markers and exposure metrics.
Methods: Data from 47 Crohn's disease patients of a randomized controlled trial were analyzed post hoc. All patients had secondary treatment failure at inclusion and had received intensified IFX of 5 mg/kg every 4 weeks for up to 20 weeks. Graphical analyses were performed to explore exposure-response relationships. Metrics of exposure included area under the concentration-time curve (AUC) and trough concentrations (Cmin). Disease activity was measured by CDAI and CRP values, their change from baseline/last visit, and response/remission outcomes at week 12.
Results: Although trends toward lower Cmin and lower AUC in nonresponders were observed, neither CDAI nor CRP showed consistent trends of lower disease activity with higher IFX exposure across the 30 evaluated relationships. As expected, Cmin and AUC were strongly correlated with each other. In contrast, the disease activity markers were only weakly correlated with each other.
Conclusions: No significant relationship between disease activity, as evaluated by CDAI or CRP, and IFX exposure was identified. AUC did not add benefit compared with Cmin. These findings support the continued use of Cmin and call for stringent objective disease activity (bio-)markers (eg, endoscopy) to form the basis of personalized IFX therapy for Crohn's disease patients with IFX treatment failure.
Entdeckendes Lernen
(2017)
Despite the demonstrable popularity of discovery learning in German-language mathematics education, there are currently no critical contributions that could help to question and sharpen this fundamental teaching concept. This discussion paper first works out the theory and some implementation examples of discovery learning in order to show that discovery learning resembles a vague umbrella term under which questionable learning environments are often legitimized. Subsequently, problems of discovery learning in mathematics teaching and possibilities for overcoming them are discussed on the basis of epistemological, learning-theoretical, didactic, and sociocultural considerations. It turns out that the conception of discovery learning falls behind the current state of research in mathematics education and confronts teachers as well as pupils with impossible demands, that learning-theoretical advantages of discovery learning are often not demonstrable, that the idea of discovery rests on a problematic Platonist understanding of knowledge, and that discovery learning threatens to disadvantage educationally disadvantaged pupils. Finally, research desiderata are derived whose treatment could help to overcome the problem areas identified.
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean-Vlasov equations as the starting point to derive ensemble Kalman-Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
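The combined state-and-parameter idea can be illustrated with a single discrete-time stochastic EnKF analysis step on an augmented state [x, θ]. This is a toy sketch under assumed model settings, not the continuous-time ensemble Kalman-Bucy algorithms derived in the paper.

```python
import random

def enkf_update(ens, obs, h_idx, r_var, rng):
    """Stochastic ensemble Kalman filter analysis step with perturbed
    observations. `ens` is a list of state vectors; component h_idx is
    observed with Gaussian noise of variance r_var."""
    n, d = len(ens), len(ens[0])
    mean = [sum(e[i] for e in ens) / n for i in range(d)]
    hm = mean[h_idx]
    # sample cross-covariance between state components and the observed one
    p_xy = [sum((e[i] - mean[i]) * (e[h_idx] - hm) for e in ens) / (n - 1)
            for i in range(d)]
    p_yy = p_xy[h_idx] + r_var
    gain = [c / p_yy for c in p_xy]
    return [[e[i] + gain[i] * (obs + rng.gauss(0.0, r_var ** 0.5) - e[h_idx])
             for i in range(d)] for e in ens]

# augmented state [x, theta]: the forecast model sets x = theta + noise,
# so observing x is informative about the unknown parameter theta
rng = random.Random(0)
ens = [[t + rng.gauss(0.0, 0.1), t] for t in (rng.gauss(0.0, 1.0) for _ in range(200))]
ens = enkf_update(ens, obs=1.0, h_idx=0, r_var=0.01, rng=rng)
theta_mean = sum(e[1] for e in ens) / len(ens)
print(round(theta_mean, 2))
```

Because x and θ are strongly correlated in the forecast ensemble, the analysis pulls the parameter ensemble toward values consistent with the observation.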
A term, also called a tree, is said to be linear, if each variable occurs in the term only once. The linear terms and sets of linear terms, the so-called linear tree languages, play some role in automata theory and in the theory of formal languages in connection with recognizability. We define a partial superposition operation on sets of linear trees of a given type and study the properties of some many-sorted partial clones that have sets of linear trees as elements and partial superposition operations as fundamental operations. The endomorphisms of those algebras correspond to nondeterministic linear hypersubstitutions.
An efficient immunosurveillance of CD8(+) T cells in the periphery depends on positive/negative selection of thymocytes and thus on the dynamics of antigen degradation and epitope production by thymoproteasome and immunoproteasome in the thymus. Although studies in mouse systems have shown how thymoproteasome activity differs from that of immunoproteasome and strongly impacts the T cell repertoire, the proteolytic dynamics and the regulation of human thymoproteasome are unknown. By combining biochemical and computational modeling approaches, we show here that human 20S thymoproteasome and immunoproteasome differ not only in the proteolytic activity of the catalytic sites but also in the peptide transport. These differences impinge upon the quantity of peptide products rather than where the substrates are cleaved. The comparison of the two human 20S proteasome isoforms depicts different processing of antigens that are associated to tumors and autoimmune diseases.
We study corner-degenerate pseudo-differential operators of any singularity order and develop ellipticity based on the principal symbolic hierarchy, associated with the stratification of the underlying space. We construct parametrices within the calculus and discuss the aspect of additional trace and potential conditions along lower-dimensional strata.
For n ∈ ℕ, let X_n = {a_1, a_2, …, a_n} be an n-element set and let F = (X_n; <_f) be a fence, also called a zigzag poset. As usual, we denote by I_n the symmetric inverse semigroup on X_n. We say that a transformation α ∈ I_n is fence-preserving if x <_f y implies xα <_f yα, for all x, y in the domain of α. In this paper, we study the semigroup PFI_n of all partial fence-preserving injections of X_n and its subsemigroup IF_n = {α ∈ PFI_n : α⁻¹ ∈ PFI_n}. Clearly, IF_n is an inverse semigroup and contains all regular elements of PFI_n. We characterize the Green's relations for the semigroup IF_n. Further, we prove that the semigroup IF_n is generated by its elements of rank ≥ n − 2. Moreover, for n ∈ 2ℕ, we find the least generating set and calculate the rank of IF_n.
Fractures serve as highly conductive preferential flow paths for fluids in rocks, which are difficult to exactly reconstruct in numerical models. Especially in low-conductive rocks, fractures are often the only pathways for advection of solutes and heat. The presented study compares the results from hydraulic and tracer tomography applied to invert a theoretical discrete fracture network (DFN) that is based on data from synthetic cross-well testing. For hydraulic tomography, pressure pulses in various injection intervals are induced and the pressure responses in the monitoring intervals of a nearby observation well are recorded. For tracer tomography, a conservative tracer is injected at different well levels and the depth-dependent breakthrough of the tracer is monitored. A recently introduced transdimensional Bayesian inversion procedure is applied for both tomographical methods, which adjusts the fracture positions, orientations, and numbers based on given geometrical fracture statistics. The Metropolis-Hastings-Green algorithm used is refined by the simultaneous estimation of the measurement error's variance, that is, the measurement noise. Based on the presented application to invert the two-dimensional cross-section between the source and the receiver well, hydraulic tomography reveals itself to be more suitable for reconstructing the original DFN. This is based on a probabilistic representation of the inverted results by means of fracture probabilities.
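The paper's transdimensional Metropolis-Hastings-Green sampler is far richer, but the idea of simultaneously estimating the measurement-error variance can be sketched with an ordinary random-walk Metropolis sampler on a toy model; everything below (model, priors, step sizes) is an illustrative assumption.

```python
import math, random

def mh_joint(data, n_iter=20000, seed=3):
    """Random-walk Metropolis sampler for the toy model y_i = m + N(0, s^2),
    jointly sampling the model parameter m and the noise scale s
    (flat priors on m and on log s)."""
    rng = random.Random(seed)
    n = len(data)
    def log_post(m, log_s):
        s2 = math.exp(2.0 * log_s)
        rss = sum((y - m) ** 2 for y in data)
        return -n * log_s - 0.5 * rss / s2
    m, log_s = 0.0, 0.0
    lp = log_post(m, log_s)
    out = []
    for _ in range(n_iter):
        m_p = m + rng.gauss(0.0, 0.2)          # propose new parameter
        ls_p = log_s + rng.gauss(0.0, 0.2)     # propose new noise scale
        lp_p = log_post(m_p, ls_p)
        if math.log(rng.random()) < lp_p - lp:
            m, log_s, lp = m_p, ls_p, lp_p
        out.append((m, math.exp(log_s)))
    return out

# synthetic data with true m = 2 and noise scale 0.5
rng = random.Random(42)
data = [2.0 + rng.gauss(0.0, 0.5) for _ in range(50)]
draws = mh_joint(data)[5000:]                  # discard burn-in
m_hat = sum(d[0] for d in draws) / len(draws)
s_hat = sum(d[1] for d in draws) / len(draws)
print(round(m_hat, 2), round(s_hat, 2))
```

Sampling the noise scale alongside the model parameters lets the data itself determine how tightly the model should fit, mirroring the refinement described in the abstract.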
We study the mathematical structure underlying the concept of locality which lies at the heart of classical and quantum field theory, and develop a machinery used to preserve locality during the renormalisation procedure. Viewing renormalisation in the framework of Connes and Kreimer as the algebraic Birkhoff factorisation of characters on a Hopf algebra with values in a Rota-Baxter algebra, we build locality variants of these algebraic structures, leading to a locality variant of the algebraic Birkhoff factorisation. This provides an algebraic formulation of the conservation of locality while renormalising. As an application in the context of the Euler-Maclaurin formula on lattice cones, we renormalise the exponential generating function which sums over the lattice points in a lattice cone. As a consequence, for a suitable multivariate regularisation, renormalisation from the algebraic Birkhoff factorisation amounts to composition by a projection onto holomorphic multivariate germs.
A term t is linear if no variable occurs more than once in t. An identity s ≈ t is said to be linear if s and t are linear terms. Identities are particular formulas. As for terms superposition operations can be defined for formulas too. We define the arbitrary linear formulas and seek for a condition for the set of all linear formulas to be closed under superposition. This will be used to define the partial superposition operations on the set of linear formulas and a partial many-sorted algebra Formclonelin(τ, τ′). This algebra has similar properties with the partial many-sorted clone of all linear terms. We extend the concept of a hypersubstitution of type τ to the linear hypersubstitutions of type (τ, τ′) for algebraic systems. The extensions of linear hypersubstitutions of type (τ, τ′) send linear formulas to linear formulas, presenting weak endomorphisms of Formclonelin(τ, τ′).
On a smooth complete Riemannian spin manifold with smooth compact boundary, we demonstrate that the Atiyah-Singer Dirac operator depends Riesz continuously on perturbations of local boundary conditions. The Lipschitz bound for this map depends on the Lipschitz smoothness and ellipticity of the boundary conditions and on bounds on the Ricci curvature and its first derivatives, as well as on a lower bound on the injectivity radius, away from a compact neighbourhood of the boundary. More generally, we prove perturbation estimates for functional calculi of elliptic operators on manifolds with local boundary conditions.
This paper studies the effects of two different frames on decisions in a dictator game. Before making their allocation decision, dictators read a short text. Depending on the treatment, the text either emphasizes their decision power and freedom of choice or it stresses their responsibility for the receiver’s payoff. Together with a control treatment without such a text, three treatments were conducted with a total of 207 dictators. Our results show a different reaction to these texts depending on the dictator’s gender. We find that only men react positively to a text that stresses their responsibility for the receiver, while only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
In this paper, we present a convergence rate analysis of the modified Landweber method under a logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. Reconstructions of the shape of an unknown domain for an inverse potential problem using the modified Landweber method are exhibited.
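As background, the classical (unmodified) Landweber iteration combined with Morozov's discrepancy principle can be sketched for a linear toy problem A x = y; the matrix, the noise level delta and the parameter tau below are illustrative choices, not taken from the paper.

```python
import numpy as np

def landweber(A, y_delta, delta, tau=1.1, omega=None, max_iter=10000):
    """Iterate x_{k+1} = x_k + omega * A^T (y_delta - A x_k) and stop as soon
    as ||A x_k - y_delta|| <= tau * delta (Morozov's discrepancy principle)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for convergence
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        residual = y_delta - A @ x
        if np.linalg.norm(residual) <= tau * delta:
            break
        x = x + omega * A.T @ residual
    return x

A = np.array([[2.0, 0.0], [0.0, 0.5]])
x_true = np.array([1.0, -1.0])
delta = 1e-3
y_delta = A @ x_true + delta * np.array([1.0, 1.0]) / np.sqrt(2)  # noisy data
x_rec = landweber(A, y_delta, delta)
print(np.linalg.norm(A @ x_rec - y_delta) <= 1.1 * delta)         # True
```

The modified method analysed in the paper replaces this plain gradient step; the stopping rule is the same discrepancy principle.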
This paper presents a scalable E-band radar platform based on single-channel fully integrated transceivers (TRX) manufactured in 130-nm silicon-germanium (SiGe) BiCMOS technology. The TRX is suitable for flexible radar systems exploiting massive multiple-input multiple-output (MIMO) techniques for multidimensional sensing. A fully integrated fractional-N phase-locked loop (PLL) comprising a 39.5-GHz voltage-controlled oscillator is used to generate wideband frequency-modulated continuous-wave (FMCW) chirps for E-band radar front ends. The TRX is equipped with a vector modulator (VM) for high-speed carrier modulation and beam-forming techniques. A single TRX achieves 19.2-dBm maximum output power and 27.5-dB total conversion gain with an input-referred 1-dB compression point of -10 dBm. It consumes 220 mA from a 3.3-V supply and occupies 3.96 mm² of silicon area. A two-channel radar platform based on full-custom TRXs and PLL was fabricated to demonstrate high-precision and high-resolution FMCW sensing. The radar enables up to 10-GHz frequency ramp generation in the 74-84-GHz range, which results in 1.5-cm spatial resolution. Due to the high output power, and thus high signal-to-noise ratio (SNR), a ranging precision of 7.5 µm for a target at 2 m was achieved. The proposed architecture supports scalable multichannel applications for automotive FMCW radar using a single local oscillator (LO).
A zig-zag (or fence) order is a special partial order on a (finite) set. In this paper, we consider the semigroup TFn of all order-preserving transformations on an n-element zig-zag-ordered set. We determine the rank of TFn and provide a minimal generating set for TFn. Moreover, a formula for the number of idempotents in TFn is given.
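For very small n, such counts can be checked by brute force; the encoding of the zig-zag order 0 < 1 > 2 < 3 > ... below is an illustrative sketch, not the paper's closed formula.

```python
from itertools import product

def fence_leq(a, b):
    """a <= b in the zig-zag order 0 < 1 > 2 < 3 > ... (odd points maximal)."""
    return a == b or (abs(a - b) == 1 and b % 2 == 1)

def count_idempotents(n):
    """Count idempotent order-preserving self-maps of the n-element fence."""
    count = 0
    for f in product(range(n), repeat=n):
        order_preserving = all(
            fence_leq(f[a], f[b])
            for a in range(n) for b in range(n) if fence_leq(a, b)
        )
        if order_preserving and all(f[f[i]] == f[i] for i in range(n)):
            count += 1
    return count

print(count_idempotents(2))   # 3: a two-element fence is just a 2-chain
```

Enumerating all n^n maps is of course only feasible for tiny n; the point of the paper is the general formula.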
We prove a version of the Hopf-Rinow theorem with respect to path metrics on discrete spaces. The novel aspect is that we do not a priori assume local finiteness but isolate a local finiteness type condition, called essentially locally finite, that is indeed necessary. As a by-product we identify the maximal weight, called the geodesic weight, generating the path metric in the situation when the space is complete with respect to any of the equivalent notions of completeness proven in the Hopf-Rinow theorem. As an application we characterize the graphs for which the resistance metric is a path metric induced by the graph structure.
We study elements of the calculus of boundary value problems in a variant of Boutet de Monvel’s algebra (Acta Math 126:11–51, 1971) on a manifold N with edge and boundary. If the boundary is empty then the approach corresponds to Schulze (Symposium on partial differential equations (Holzhau, 1988), BSB Teubner, Leipzig, 1989) and other papers from the subsequent development. For a non-trivial boundary we study Mellin-edge quantizations and compositions within the structure in terms of a new Mellin-edge quantization, compared with a more traditional technique. Similar structures in the closed case have been studied in Gil et al.
The majority of earthquakes occur unexpectedly and can trigger subsequent sequences of events that can culminate in more powerful earthquakes. This self-exciting nature of seismicity generates complex clustering of earthquakes in space and time. Therefore, the problem of constraining the magnitude of the largest expected earthquake during a future time interval is of critical importance in mitigating earthquake hazard. We address this problem by developing a methodology to compute the probabilities for such extreme earthquakes to be above certain magnitudes. We combine the Bayesian methods with the extreme value theory and assume that the occurrence of earthquakes can be described by the Epidemic Type Aftershock Sequence process. We analyze in detail the application of this methodology to the 2016 Kumamoto, Japan, earthquake sequence. We are able to estimate retrospectively the probabilities of having large subsequent earthquakes during several stages of the evolution of this sequence.
We show that the Dirac operator on a compact globally hyperbolic Lorentzian spacetime with spacelike Cauchy boundary is a Fredholm operator if appropriate boundary conditions are imposed. We prove that the index of this operator is given by the same expression as in the index formula of Atiyah-Patodi-Singer for Riemannian manifolds with boundary. The index is also shown to equal that of a certain operator constructed from the evolution operator and a spectral projection on the boundary. In case the metric is of product type near the boundary a Feynman parametrix is constructed.
We generalise disagreement percolation to Gibbs point processes of balls with varying radii. This allows us to establish the uniqueness of the Gibbs measure and exponential decay of pair correlations in the low-activity regime by comparison with a sub-critical Boolean model. Applications to the Continuum Random Cluster model and the Quermass-interaction model are presented. At the core of our proof lies an explicit dependent thinning from a Poisson point process to a dominated Gibbs point process.
We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, upper bounds on the error of the broad family of spectral regularization methods for reproducing kernel decision rules trained on a sample coming from a tau-mixing process.
We study the spectral location of strongly pattern-equivariant Hamiltonians arising through configurations on a colored lattice. Roughly speaking, two configurations are "close to each other" if, up to a translation, they "almost coincide" on a large fixed ball. The larger this ball, the more similar they are, and this induces a metric on the space of the corresponding dynamical systems. Our main result states that the map which sends a given configuration to the spectrum of its associated Hamiltonian is Hölder (even Lipschitz) continuous in the usual Hausdorff metric. Specifically, the spectral distance of two Hamiltonians is estimated by the distance of the corresponding dynamical systems.
We present new conditions for semigroups of positive operators to converge strongly as time tends to infinity. Our proofs are based on a novel approach combining the well-known splitting theorem by Jacobs, de Leeuw, and Glicksberg with a purely algebraic result about positive group representations. Thus, we obtain convergence theorems not only for one-parameter semigroups but also for a much larger class of semigroup representations. Our results allow for a unified treatment of various theorems from the literature asserting that, under technical assumptions, a bounded positive C₀-semigroup containing or dominating a kernel operator converges strongly as t → ∞. We gain new insight into the structure-theoretical background of those theorems and generalize them in several respects; in particular, we drop any kind of continuity or regularity assumption with respect to the time parameter.
We prove the Fréchet differentiability with respect to the drift of Perron–Frobenius and Koopman operators associated to time-inhomogeneous ordinary stochastic differential equations. This result relies on a similar differentiability result for pathwise expectations of path functionals of the solution of the stochastic differential equation, which we establish using Girsanov's formula. We demonstrate the significance of our result in the context of dynamical systems and operator theory, by proving continuously differentiable drift dependence of the simple eigen- and singular values and the corresponding eigen- and singular functions of the stochastic Perron–Frobenius and Koopman operators.
Our first result concerns a characterization by means of a functional equation of Poisson point processes conditioned by the value of their first moment. It leads to a generalized version of Mecke’s formula. En passant, it also allows us to gain quantitative results about stochastic domination for Poisson point processes under linear constraints. Since bridges of a pure jump Lévy process in Rd with a height a can be interpreted as a Poisson point process on space–time conditioned by pinning its first moment to a, our approach allows us to characterize bridges of Lévy processes by means of a functional equation. The latter result has two direct applications: First, we obtain a constructive and simple way to sample Lévy bridge dynamics; second, it allows us to estimate the number of jumps for such bridges. We finally show that our method remains valid for linearly perturbed Lévy processes like periodic Ornstein–Uhlenbeck processes driven by Lévy noise.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.
The XI international conference "Stochastic and Analytic Methods in Mathematical Physics" was held in Yerevan, 2-7 September 2019, and was dedicated to the memory of the great mathematician Robert Adol’fovich Minlos, who passed away in January 2018.
The present volume collects a large majority of the contributions presented at the conference, covering the following domains of contemporary interest: classical and quantum statistical physics, mathematical methods in quantum mechanics, stochastic analysis, and applications of point processes in statistical mechanics. The authors are specialists from Armenia, the Czech Republic, Denmark, France, Germany, Italy, Japan, Lithuania, Russia, the UK and Uzbekistan.
A particular aim of this volume is to offer young scientists basic material in order to inspire their future research in the wide fields presented here.
Hypersubstitutions are mappings which map operation symbols to terms. Terms can be visualized by trees. Hypersubstitutions can be extended to mappings defined on sets of trees. The nodes of the trees, describing terms, are labelled by operation symbols and by colors, i.e. certain positive integers. We are interested in mappings which map differently-colored operation symbols to different terms. In this paper we extend the theory of hypersubstitutions and solid varieties to multi-hypersubstitutions and colored solid varieties. We develop the interconnections between such colored terms and multi-hypersubstitutions and the equational theory of Universal Algebra. The collection of all varieties of a given type forms a complete lattice which is very complex and difficult to study; multi-hypersubstitutions and colored solid varieties offer a new method to study complete sublattices of this lattice.
In this paper, we determine necessary and sufficient conditions for Bruck-Reilly and generalized Bruck-Reilly ∗-extensions of arbitrary monoids to be regular, coregular and strongly π-inverse. These semigroup classes have applications in various fields of mathematics, such as matrix theory, discrete mathematics and p-adic analysis (especially in operator theory). In addition, while regularity and coregularity have many applications in the context of boundaries (again in operator theory), inverse monoids and Bruck-Reilly extensions bring together fixed-point results of algebra, topology and geometry within the scope of this journal.
The efficient time integration of the dynamical core equations for numerical weather prediction (NWP) remains a key challenge. One of the most popular methods is currently provided by implementations of the semi-implicit semi-Lagrangian (SISL) method, originally proposed by Robert (J. Meteorol. Soc. Jpn., 1982). Practical implementations of the SISL method are, however, not without certain shortcomings with regard to accuracy, conservation properties and stability. Based on recent work by Gottwald, Frank and Reich (LNCSE, Springer, 2002), Frank, Reich, Staniforth, White and Wood (Atm. Sci. Lett., 2005) and Wood, Staniforth and Reich (Atm. Sci. Lett., 2006) we propose an alternative semi-Lagrangian implementation based on a set of regularized equations and the popular Störmer-Verlet time stepping method in the context of the shallow-water equations (SWEs). Ultimately, the goal is to develop practical implementations for the 3D Euler equations that overcome some or all shortcomings of current SISL implementations.
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean–Vlasov equations as the starting point to derive ensemble Kalman–Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
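The continuous-time McKean-Vlasov formulation used in the paper is beyond a short example, but the underlying ensemble correction mechanism can be illustrated by a discrete-time, perturbed-observation ensemble Kalman analysis step for a scalar state; all names and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(Xf, y, H, R):
    """Perturbed-observation EnKF analysis step for a scalar state/observation."""
    Pf = np.cov(Xf)                        # forecast variance from the ensemble
    K = Pf * H / (H * Pf * H + R)          # Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(R), size=Xf.shape)  # perturbed obs
    return Xf + K * (y_pert - H * Xf)

Xf = rng.normal(0.0, 1.0, size=200)        # forecast ensemble
y, H, R = 2.0, 1.0, 0.01                   # observation, operator, noise variance
Xa = enkf_analysis(Xf, y, H, R)
print(abs(Xa.mean() - y) < abs(Xf.mean() - y))   # True: mean pulled toward y
```

The paper's combined state-and-parameter estimation augments the state with the unknown parameters and replaces this discrete update by a continuous-time Kalman-Bucy dynamic.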
In this study we present iterative regularization methods using rational approximations, in particular Padé approximants, which work well for ill-posed problems. We prove that the (k,j)-Padé method is a convergent and order-optimal iterative regularization method when using the discrepancy principle of Morozov. Furthermore, we present a hybrid Padé method, compare it with other well-known methods and find that it is faster than the Landweber method. It is worth mentioning that this study completes the paper [A. Kirsche, C. Böckmann, Rational approximations for ill-conditioned equation systems, Appl. Math. Comput. 171 (2005) 385-397], where this method was treated to solve ill-conditioned equation systems.
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
We study mixed boundary value problems for an elliptic operator A on a manifold X with boundary Y, i.e., Au = f in int X, T± u = g± on int Y±, where Y is subdivided into subsets Y± with an interface Z and boundary conditions T± on Y± that are Shapiro-Lopatinskij elliptic up to Z from the respective sides. We assume that Z ⊂ Y is a manifold with conical singularity v. As an example we consider the Zaremba problem, where A is the Laplacian, T− the Dirichlet and T+ the Neumann condition. The problem is treated as a corner boundary value problem near v, which is the new point and the main difficulty in this paper. Outside v the problem belongs to the edge calculus, as is shown in Bull. Sci. Math. (to appear). With a mixed problem we associate Fredholm operators in weighted corner Sobolev spaces with double weights, under suitable edge conditions of trace and potential type along Z \ {v}. We construct parametrices within the calculus and establish the regularity of solutions.
We introduce an abstract concept of quantum field theory on categories fibered in groupoids over the category of spacetimes. This provides us with a general and flexible framework to study quantum field theories defined on spacetimes with extra geometric structures such as bundles, connections and spin structures. Using right Kan extensions, we can assign to any such theory an ordinary quantum field theory defined on the category of spacetimes and we shall clarify under which conditions it satisfies the axioms of locally covariant quantum field theory. The same constructions can be performed in a homotopy theoretic framework by using homotopy right Kan extensions, which allows us to obtain first toy-models of homotopical quantum field theories resembling some aspects of gauge theories.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
This paper is concerned with localization properties of coherent states. Instead of classical uncertainty relations we consider "generalized" localization quantities. This is done by introducing measures on the reproducing kernel. In this context we may prove the existence of optimally localized states. Moreover, we provide a numerical scheme for deriving them.
The aim of this paper is to express the Conley-Zehnder index of a symplectic path in terms of an index due to Leray and which has been studied by one of us in a previous work. This will allow us to prove a formula for the Conley-Zehnder index of the product of two symplectic paths in terms of a symplectic Cayley transform. We apply our results to a rigorous study of the Weyl representation of metaplectic operators, which plays a crucial role in the understanding of semiclassical quantization of Hamiltonian systems exhibiting chaotic behavior.
We prove the existence of sectors of minimal growth for general closed extensions of elliptic cone operators under natural ellipticity conditions. This is achieved by the construction of a suitable parametrix and reduction to the boundary. Special attention is devoted to the clarification of the analytic structure of the resolvent.
Special p-forms are forms which have components fμ1…μp equal to +1, -1 or 0 in some orthonormal basis. A p-form φ ∈ ΛpRd is called democratic if the set of nonzero components {φμ1…μp} is symmetric under the transitive action of a subgroup of O(d,Z) on the indices {1, …, d}. Knowledge of these symmetry groups allows us to define mappings of special democratic p-forms in d dimensions to special democratic P-forms in D dimensions for successively higher P = p and D = d. In particular, we display a remarkable nested structure of special forms, including a U(3)-invariant 2-form in six dimensions, a G2-invariant 3-form in seven dimensions, a Spin(7)-invariant 4-form in eight dimensions and a special democratic 6-form Ω in ten dimensions. The latter has the remarkable property that its contraction with any one of five distinct bivectors yields, in the orthogonal eight dimensions, the Spin(7)-invariant 4-form. We discuss various properties of this ten-dimensional form.
Renormalisation and locality
(2020)
Continuous insight into biological processes has led to the development of large-scale, mechanistic systems biology models of pharmacologically relevant networks. While these models are typically designed to study the impact of diverse stimuli or perturbations on multiple system variables, the focus in pharmacological research is often on a specific input, e.g., the dose of a drug, and a specific output related to the drug effect or response in terms of some surrogate marker.
To study a chosen input-output pair, the complexity of the interactions as well as the size of the models hinders easy access and understanding of the details of the input-output relationship.
The objective of this thesis is the development of a mathematical approach, specifically a model reduction technique, that allows (i) quantifying the importance of the different state variables for a given input-output relationship, and (ii) reducing the dynamics to its essential features, allowing for a physiological interpretation of state variables as well as parameter estimation in the statistical analysis of clinical data. We develop a model reduction technique in a control-theoretic setting by first defining a novel type of time-limited controllability and observability gramians for nonlinear systems. We then show the superiority of the time-limited generalised gramians for nonlinear systems in the context of balanced truncation for a benchmark system from control theory.
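For intuition, the classical linear time-invariant special case of a time-limited controllability gramian, W_c(T) = ∫₀ᵀ e^{At} B Bᵀ e^{Aᵀt} dt, can be sketched numerically; the diagonal system, horizon and tolerance below are illustrative assumptions, and the thesis's nonlinear generalisation is not reproduced here.

```python
import numpy as np

def time_limited_gramian(lam, B, T, steps=4000):
    """Trapezoidal approximation of W_c(T) = int_0^T e^{At} B B^T e^{A^T t} dt
    for a stable diagonal A = diag(lam), where e^{At} is elementwise exp."""
    ts = np.linspace(0.0, T, steps)
    dt = ts[1] - ts[0]
    vals = np.array([
        (np.exp(lam * t)[:, None] * B) @ (np.exp(lam * t)[:, None] * B).T
        for t in ts
    ])
    return dt * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))

lam = np.array([-1.0, -2.0])            # eigenvalues of the diagonal A
B = np.array([[1.0], [1.0]])
T = 1.0
W = time_limited_gramian(lam, B, T)

# closed form for diagonal A: W_ij = B_i B_j (e^{(lam_i+lam_j)T} - 1)/(lam_i+lam_j)
S = lam[:, None] + lam[None, :]
W_exact = (B @ B.T) * (np.exp(S * T) - 1.0) / S
print(np.allclose(W, W_exact, atol=1e-5))   # True
```

Truncating states with small contribution to such a gramian is the basic idea behind balanced truncation, which the time-limited nonlinear gramians of the thesis generalise.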
The concept of time-limited controllability and observability gramians is subsequently used to introduce a state and time-dependent quantity called the input-response (ir) index that quantifies the importance of state variables for a given input-response relationship at a particular time.
We subsequently link our approach to sensitivity analysis, thus enabling for the first time the use of sensitivity coefficients for state-space reduction. The sensitivity-based ir-indices are given as a product of two sensitivity coefficients. This allows not only for a computationally more efficient calculation but also for a clear distinction between the extent to which the input impacts a state variable and the extent to which a state variable impacts the output.
The ir-indices give insight into the coordinated action of specific state variables for a chosen input-response relationship.
Our model reduction technique results in reduced models that still allow for a mechanistic interpretation in terms of the quantities/state variables of the original system, which is a key requirement in the field of systems pharmacology and systems biology and distinguishes the reduced models from so-called empirical drug-effect models. The ir-indices are explicitly defined with respect to a reference trajectory and are thereby dependent on the initial state; this is an important feature of the measure. This is demonstrated for an example from the field of systems pharmacology, showing that the reduced models are very informative in their ability to detect (genetic) deficiencies in certain physiological entities. A comparison of our novel model reduction technique with existing techniques shows its superiority.
The novel input-response index as a measure of the importance of state variables provides a powerful tool for understanding the complex dynamics of large-scale systems in the context of a specific drug-response relationship. Furthermore, the indices provide a means for a very efficient model order reduction and, thus, an important step towards translating insight from biological processes incorporated in detailed systems pharmacology models into the population analysis of clinical data.
Quantum field theory on curved spacetimes is understood as a semiclassical approximation of some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinct vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go-theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition, namely on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and the construction for the advanced and retarded fundamental solution, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form and we show existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails the incorporation of measurement information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo sampling have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational cost arising from resampling.
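As background, one step of the classical bootstrap particle filter, including the resampling stage whose cost and degeneracy issues motivate feedback particle filters, can be sketched as follows; the scalar model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_step(particles, y, sigma_obs, sigma_dyn):
    """One bootstrap-particle-filter step: propagate, weight, resample."""
    # propagate through (here trivial) dynamics plus process noise
    particles = particles + rng.normal(0.0, sigma_dyn, size=particles.shape)
    # importance weights from a Gaussian observation likelihood
    w = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
    w /= w.sum()
    # multinomial resampling: the step that causes sample impoverishment
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal(0.0, 2.0, size=500)   # prior ensemble
post = bootstrap_step(particles, y=1.5, sigma_obs=0.2, sigma_dyn=0.1)
print(abs(post.mean() - 1.5) < abs(particles.mean() - 1.5))   # True
```

Feedback particle filters replace the weight-and-resample stage by a mean-field control term, which is the route pursued in the thesis.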
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô and Stratonovich integrals and the corresponding stochastic partial differential equations is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on coupling of prediction and analysis measures are proposed. They achieve better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations with spatially varying velocity. Two methods are employed: Metropolis-Hastings with filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.