510 Mathematik
Document Type
- Preprint (373)
- Article (263)
- Doctoral Thesis (76)
- Postprint (45)
- Monograph/Edited Volume (13)
- Other (10)
- Master's Thesis (6)
- Part of a Book (5)
- Conference Proceeding (5)
- Review (3)
Language
- English (753)
- German (46)
- French (3)
- Multiple languages (1)
Keywords
- random point processes (18)
- statistical mechanics (18)
- stochastic analysis (18)
- index (14)
- boundary value problems (12)
- Fredholm property (10)
- regularization (10)
- cluster expansion (9)
- elliptic operators (9)
- data assimilation (8)
Institute
- Institut für Mathematik (740)
- Extern (14)
- Mathematisch-Naturwissenschaftliche Fakultät (14)
- Institut für Physik und Astronomie (13)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (7)
- Institut für Biochemie und Biologie (6)
- Institut für Informatik und Computational Science (5)
- Department Psychologie (4)
- Department Grundschulpädagogik (3)
- Hasso-Plattner-Institut für Digital Engineering GmbH (3)
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
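As a rough illustration of what a geopotential regularization step might look like in practice (a sketch only: a Helmholtz-type filter (1 − α²∂ₓₓ) applied on a periodic 1D grid via FFT; the operator, grid and parameters here are assumptions, not the scheme of the paper above):

```python
import numpy as np

def regularize_geopotential(phi, dx, alpha):
    """Apply a Helmholtz-type smoother (1 - alpha^2 d^2/dx^2) phibar = phi
    on a periodic 1D grid via FFT (illustrative choice, not the paper's scheme)."""
    n = phi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers
    phibar_hat = np.fft.fft(phi) / (1.0 + (alpha * k) ** 2)
    return np.real(np.fft.ifft(phibar_hat))

# usage: smooth a noisy geopotential field before using it in the momentum equations
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
phi = 1.0 + 0.1 * np.sin(3 * x) + 0.01 * np.random.randn(x.size)
phi_reg = regularize_geopotential(phi, dx=x[1] - x[0], alpha=0.2)
```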
We study mixed boundary value problems for an elliptic operator A on a manifold X with boundary Y, i.e., Au = f in int X, T±u = g± on int Y±, where Y is subdivided into subsets Y± with an interface Z and boundary conditions T± on Y± that are Shapiro-Lopatinskij elliptic up to Z from the respective sides. We assume that Z ⊂ Y is a manifold with conical singularity v. As an example we consider the Zaremba problem, where A is the Laplacian and T− imposes Dirichlet and T+ Neumann conditions. The problem is treated as a corner boundary value problem near v, which is the new point and the main difficulty in this paper. Outside v the problem belongs to the edge calculus, as is shown in Bull. Sci. Math. (to appear). With a mixed problem we associate Fredholm operators in weighted corner Sobolev spaces with double weights, under suitable edge conditions of trace and potential type along Z \ {v}. We construct parametrices within the calculus and establish the regularity of solutions.
We introduce an abstract concept of quantum field theory on categories fibered in groupoids over the category of spacetimes. This provides us with a general and flexible framework to study quantum field theories defined on spacetimes with extra geometric structures such as bundles, connections and spin structures. Using right Kan extensions, we can assign to any such theory an ordinary quantum field theory defined on the category of spacetimes and we shall clarify under which conditions it satisfies the axioms of locally covariant quantum field theory. The same constructions can be performed in a homotopy theoretic framework by using homotopy right Kan extensions, which allows us to obtain first toy-models of homotopical quantum field theories resembling some aspects of gauge theories.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
This paper is concerned with localization properties of coherent states. Instead of classical uncertainty relations we consider "generalized" localization quantities. This is done by introducing measures on the reproducing kernel. In this context we may prove the existence of optimally localized states. Moreover, we provide a numerical scheme for deriving them.
The aim of this paper is to express the Conley-Zehnder index of a symplectic path in terms of an index due to Leray, which has been studied by one of us in a previous work. This will allow us to prove a formula for the Conley-Zehnder index of the product of two symplectic paths in terms of a symplectic Cayley transform. We apply our results to a rigorous study of the Weyl representation of metaplectic operators, which plays a crucial role in the understanding of the semiclassical quantization of Hamiltonian systems exhibiting chaotic behavior.
We prove the existence of sectors of minimal growth for general closed extensions of elliptic cone operators under natural ellipticity conditions. This is achieved by the construction of a suitable parametrix and reduction to the boundary. Special attention is devoted to the clarification of the analytic structure of the resolvent.
Special p-forms are forms which have components f_{μ1…μp} equal to +1, -1 or 0 in some orthonormal basis. A p-form ϕ ∈ Λ^p R^d is called democratic if the set of nonzero components {ϕ_{μ1...μp}} is symmetric under the transitive action of a subgroup of O(d,Z) on the indices {1, . . . , d}. Knowledge of these symmetry groups allows us to define mappings of special democratic p-forms in d dimensions to special democratic P-forms in D dimensions for successively higher P ≥ p and D ≥ d. In particular, we display a remarkable nested structure of special forms including a U(3)-invariant 2-form in six dimensions, a G2-invariant 3-form in seven dimensions, a Spin(7)-invariant 4-form in eight dimensions and a special democratic 6-form O in ten dimensions. The latter has the remarkable property that its contraction with any one of five distinct bivectors yields, in the orthogonal eight dimensions, the Spin(7)-invariant 4-form. We discuss various properties of this ten-dimensional form.
Renormalisation and locality
(2020)
Continuous insight into biological processes has led to the development of large-scale, mechanistic systems biology models of pharmacologically relevant networks. While these models are typically designed to study the impact of diverse stimuli or perturbations on multiple system variables, the focus in pharmacological research is often on a specific input, e.g., the dose of a drug, and a specific output related to the drug effect or response in terms of some surrogate marker.
To study a chosen input-output pair, the complexity of the interactions as well as the size of the models hinders easy access and understanding of the details of the input-output relationship.
The objective of this thesis is the development of a mathematical approach, specifically a model reduction technique, that allows (i) to quantify the importance of the different state variables for a given input-output relationship, and (ii) to reduce the dynamics to its essential features, allowing for a physiological interpretation of state variables as well as parameter estimation in the statistical analysis of clinical data. We develop a model reduction technique in a control-theoretic setting by first defining a novel type of time-limited controllability and observability gramians for nonlinear systems. We then show the superiority of the time-limited generalised gramians for nonlinear systems in the context of balanced truncation for a benchmark system from control theory.
The concept of time-limited controllability and observability gramians is subsequently used to introduce a state and time-dependent quantity called the input-response (ir) index that quantifies the importance of state variables for a given input-response relationship at a particular time.
We subsequently link our approach to sensitivity analysis, thus enabling for the first time the use of sensitivity coefficients for state space reduction. The sensitivity-based ir-indices are given as a product of two sensitivity coefficients. This allows not only for a computationally more efficient calculation but also for a clear distinction of the extent to which the input impacts a state variable and the extent to which a state variable impacts the output.
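To illustrate the product structure of the sensitivity-based indices described above, here is a minimal sketch under simplifying assumptions (a hypothetical two-compartment toy model, finite-difference sensitivities, and an index taken as the product of input-to-state and state-to-output sensitivities; the thesis' exact definitions are not reproduced):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-compartment toy model: the input u (dose rate) enters x1, the output is y = x2(T).
K12, K20 = 0.5, 0.2

def rhs(t, x, u):
    return [u - K12 * x[0], K12 * x[0] - K20 * x[1]]

def propagate(x0, t0, t1, u):
    return solve_ivp(rhs, (t0, t1), x0, args=(u,), rtol=1e-8).y[:, -1]

def ir_indices(u=1.0, t=2.0, T=10.0, eps=1e-5):
    """Index of state i at time t taken as (dx_i(t)/du) * (dy(T)/dx_i(t)), via finite differences."""
    x_t = propagate([0.0, 0.0], 0.0, t, u)
    dxdu = (propagate([0.0, 0.0], 0.0, t, u + eps) - x_t) / eps     # input -> state sensitivity
    y_base = propagate(x_t, t, T, u)[1]
    indices = np.zeros(2)
    for i in range(2):
        x_pert = x_t.copy()
        x_pert[i] += eps
        dydx = (propagate(x_pert, t, T, u)[1] - y_base) / eps        # state -> output sensitivity
        indices[i] = dxdu[i] * dydx
    return indices

print(ir_indices())  # importance of x1 and x2 for the dose -> response relationship at t = 2
```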
The ir-indices give insight into the coordinated action of specific state variables for a chosen input-response relationship.
Our developed model reduction technique results in reduced models that still allow for a mechanistic interpretation in terms of the quantities/state variables of the original system, which is a key requirement in the field of systems pharmacology and systems biology and distinguishes the reduced models from so-called empirical drug effect models. The ir-indices are explicitly defined with respect to a reference trajectory and thereby depend on the initial state (this is an important feature of the measure). This is demonstrated for an example from the field of systems pharmacology, showing that the reduced models are very informative in their ability to detect (genetic) deficiencies in certain physiological entities. Comparing our novel model reduction technique to already existing techniques shows its superiority.
The novel input-response index as a measure of the importance of state variables provides a powerful tool for understanding the complex dynamics of large-scale systems in the context of a specific drug-response relationship. Furthermore, the indices provide a means for a very efficient model order reduction and, thus, an important step towards translating insight from biological processes incorporated in detailed systems pharmacology models into the population analysis of clinical data.
Quantum field theory on curved spacetimes is understood as a semiclassical approximation of some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinct vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go-theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition, namely one on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and the construction of the advanced and retarded fundamental solutions, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form, and we show the existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo methods have been resorted to. Chief among these methods are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and computational costs arising from resampling.
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; and (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô integrals and stochastic partial differential equations and their Stratonovich counterparts is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on a coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection equation and the wave equation with spatially varying velocity. Two methods are employed: Metropolis-Hastings with filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
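For reference, a sketch of the baseline bootstrap particle filter mentioned above, including the resampling step that the feedback particle filters are designed to avoid (standard textbook formulation with an assumed toy signal model, not the filters developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y, n_particles=500, dt=0.01, sigma_x=0.5, sigma_y=0.2):
    """Baseline bootstrap filter for dX = -X dt + sigma_x dW, observed as y_k = X_k + noise.
    The feedback particle filter replaces importance weighting and resampling by a
    gain-based feedback term; this sketch only shows the standard resampling variant."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for yk in y:
        # prediction: Euler-Maruyama step of the signal model
        x = x - x * dt + sigma_x * np.sqrt(dt) * rng.normal(size=n_particles)
        # update: weight by the observation likelihood
        w = np.exp(-0.5 * ((yk - x) / sigma_y) ** 2)
        w /= w.sum()
        means.append(np.dot(w, x))
        # resampling (the step that causes degeneracy/impoverishment issues)
        x = rng.choice(x, size=n_particles, p=w)
    return np.array(means)

# usage with synthetic data generated from the same toy model
dt, n = 0.01, 200
truth = np.zeros(n)
for k in range(1, n):
    truth[k] = truth[k - 1] - truth[k - 1] * dt + 0.5 * np.sqrt(dt) * rng.normal()
obs = truth + 0.2 * rng.normal(size=n)
est = bootstrap_particle_filter(obs)
```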
On a smooth complete Riemannian spin manifold with smooth compact boundary, we demonstrate that the Atiyah-Singer Dirac operator depends Riesz continuously on perturbations of local boundary conditions. The Lipschitz bound for this map depends on the Lipschitz smoothness and ellipticity of the boundary conditions, on bounds on the Ricci curvature and its first derivatives, as well as on a lower bound on the injectivity radius away from a compact neighbourhood of the boundary. More generally, we prove perturbation estimates for functional calculi of elliptic operators on manifolds with local boundary conditions.
One method of embedding groups into skew fields was introduced by A. I. Mal'tsev and B. H. Neumann (cf. [18, 19]). If G is an ordered group and F is a skew field, the set F((G)) of formal power series over F in G with well-ordered support forms a skew field into which the group ring F[G] can be embedded. Unfortunately, it is not sufficient that G is left-ordered, since in this case F((G)) is only an F-vector space: there is no natural way to define a multiplication on F((G)). One way to extend the original idea to left-ordered groups is to examine the endomorphism ring of F((G)), as explored by N. I. Dubrovin (cf. [5, 6]). It is possible to embed any crossed product ring F[G; η, σ] into the endomorphism ring of F((G)) such that each non-zero element of F[G; η, σ] defines an automorphism of F((G)) (cf. [5, 10]). Thus, the rational closure of F[G; η, σ] in the endomorphism ring of F((G)), which we will call the Dubrovin ring of F[G; η, σ], is a potential candidate for a skew field of fractions of F[G; η, σ]. The methods of N. I. Dubrovin made it possible to show that specific classes of groups can be embedded into a skew field. For example, N. I. Dubrovin devised special criteria which are applicable to the universal covering group of SL(2, R). These methods have also been explored by J. Gräter and R. P. Sperner (cf. [10]) as well as by N. H. Halimi and T. Ito (cf. [11]). Furthermore, it is of interest to know whether skew fields of fractions are unique. For example, left and right Ore domains have unique skew fields of fractions (cf. [2]). This is not the case in general: for example, the free group with 2 generators can be embedded into non-isomorphic skew fields of fractions (cf. [12]). It seems likely that Ore domains are the most general case for which unique skew fields of fractions exist. One approach to gaining uniqueness is to restrict the search to skew fields of fractions with additional properties. I. Hughes defined skew fields of fractions of crossed product rings F[G; η, σ], with locally indicable G, which fulfil a special condition. These are called Hughes-free skew fields of fractions, and I. Hughes proved that they are unique if they exist [13, 14]. This thesis connects the ideas of N. I. Dubrovin and I. Hughes.
The first chapter contains the basic terminology and concepts used in this thesis. We present methods provided by N. I. Dubrovin, such as the complexity of elements in rational closures and special properties of endomorphisms of the vector space of formal power series F((G)). To combine the ideas of N. I. Dubrovin and I. Hughes we introduce Conradian left-ordered groups of maximal rank and examine their connection to locally indicable groups. Furthermore, we provide notation for crossed product rings, skew fields of fractions and Dubrovin rings, and prove some technical statements which are used in later parts. The second chapter focuses on Hughes-free skew fields of fractions and their connection to Dubrovin rings. For that purpose we introduce series representations to interpret elements of Hughes-free skew fields of fractions as skew formal Laurent series. This allows us to prove that, for Conradian left-ordered groups G of maximal rank, the statement "F[G; η, σ] has a Hughes-free skew field of fractions" implies "the Dubrovin ring of F[G; η, σ] is a skew field". We also prove the converse and apply the results to give a new proof of Theorem 1 in [13].
Furthermore, we show how to extend injective ring homomorphisms of some crossed product rings to their Hughes-free skew fields of fractions. Finally, we are able to answer the open question whether Hughes-free skew fields are strongly Hughes-free (cf. [17, page 53]).
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class are estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model in order to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector where each component denotes the probability to sample a 1 for that position in order to generate a bit string.
My contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For MMAS-fp, we generalize this result to linear functions. Further, we prove a run time bound of Ω(n log n) for the algorithm UMDA on (noise-free) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in time O(n log n), which is a novelty for heuristics in the domain we consider.
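A minimal sketch of the kind of univariate EDA discussed above (a textbook-style compact genetic algorithm on OneMax; the parameters and the exact update rule are illustrative assumptions, not the variants analyzed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def onemax(x):
    return int(x.sum())

def cga(n=50, K=100, max_iters=100_000):
    """Compact GA: the univariate model is a frequency vector p; two samples per
    iteration shift p towards the better one by 1/K (borders kept away from 0 and 1)."""
    p = np.full(n, 0.5)
    for _ in range(max_iters):
        x = (rng.random(n) < p).astype(int)
        y = (rng.random(n) < p).astype(int)
        if onemax(x) < onemax(y):
            x, y = y, x                      # x is now the better sample
        p += (x - y) / K                     # move the model towards the winner
        p = np.clip(p, 1.0 / n, 1.0 - 1.0 / n)
        if onemax((p > 0.5).astype(int)) == n:
            break
    return p

model = cga()
print("marginals above 0.9:", int((model > 0.9).sum()))
```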
We show that the residue density of the logarithm of a generalized Laplacian on a closed manifold defines an invariant polynomial-valued differential form. We express it in terms of a finite sum of residues of classical pseudodifferential symbols. In the case of the square of a Dirac operator, these formulas provide a pedestrian proof of the Atiyah–Singer formula for a pure Dirac operator in four dimensions and for a twisted Dirac operator on a flat space of any dimension. These correspond to special cases of a more general formula by Scott and Zagier. In our approach, which is of perturbative nature, we use either a Campbell–Hausdorff formula derived by Okikiolu or a noncommutative Taylor-type formula.
We study origin, parameter optimization, and thermodynamic efficiency of isothermal rocking ratchets based on fractional subdiffusion within a generalized non-Markovian Langevin equation approach. A corresponding multi-dimensional Markovian embedding dynamics is realized using a set of auxiliary Brownian particles elastically coupled to the central Brownian particle (see video on the journal web site). We show that anomalous subdiffusive transport emerges due to an interplay of nonlinear response and viscoelastic effects for fractional Brownian motion in periodic potentials with broken space-inversion symmetry and driven by a time-periodic field. The anomalous transport becomes optimal for a subthreshold driving when the driving period matches a characteristic time scale of interwell transitions. It can also be optimized by varying temperature, amplitude of periodic potential and driving strength. The useful work done against a load shows a parabolic dependence on the load strength. It grows sublinearly with time and the corresponding thermodynamic efficiency decays algebraically in time because the energy supplied by the driving field scales with time linearly. However, it compares well with the efficiency of normal diffusion rocking ratchets on an appreciably long time scale.
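A sketch of the Markovian embedding idea described above, assuming a small set of auxiliary overdamped Brownian particles elastically coupled to the central particle; the spring and friction constants here are ad hoc illustrative values rather than a fit to a fractional (power-law) memory kernel:

```python
import numpy as np

rng = np.random.default_rng(2)

kBT = 0.1
eta0 = 1.0
k_aux = np.array([4.0, 1.0, 0.25])           # spring constants of auxiliary particles (assumed)
eta_aux = np.array([0.5, 2.0, 8.0])          # friction constants setting the memory time scales (assumed)

def force(x, t, V0=1.0, A=0.5, omega=1.0):
    """Ratchet force: asymmetric periodic potential plus time-periodic rocking drive."""
    dVdx = V0 * (np.cos(x) + 0.5 * np.cos(2 * x))   # broken space-inversion symmetry
    return -dVdx + A * np.cos(omega * t)

def simulate(t_end=200.0, dt=1e-3):
    x = 0.0
    y = np.zeros(k_aux.size)                  # auxiliary Brownian particles
    n = int(t_end / dt)
    traj = np.empty(n)
    for i in range(n):
        t = i * dt
        coupling = np.sum(k_aux * (y - x))
        x += dt * (force(x, t) + coupling) / eta0 \
             + np.sqrt(2 * kBT * dt / eta0) * rng.normal()
        y += dt * k_aux * (x - y) / eta_aux \
             + np.sqrt(2 * kBT * dt / eta_aux) * rng.normal(size=y.size)
        traj[i] = x
    return traj

traj = simulate()
print("net displacement:", traj[-1] - traj[0])
```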
In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view, which only distinguishes between formal and informal proofs. This dualistic attitude does not respect the differences between the argumentations classified as informal and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped.
This thesis has two parts. In the first we analyse the concept of the degree of formality (including a discussion of the respective benefits of each degree), while in the second we demonstrate its usefulness in three case studies. In the first case study we repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students learn how to prove. Third, we show how the advantages of proofs of different degrees of formality can be combined through the development of so-called tactics having a medium degree of formality. Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential.
In various biological systems and small scale technological applications particles transiently bind to a cylindrical surface. Upon unbinding the particles diffuse in the vicinal bulk before rebinding to the surface. Such bulk-mediated excursions give rise to an effective surface translation, for which we here derive and discuss the dynamic equations, including additional surface diffusion. We discuss the time evolution of the number of surface-bound particles, the effective surface mean squared displacement, and the surface propagator. In particular, we observe sub- and superdiffusive regimes. A plateau of the surface mean-squared displacement reflects a stalling of the surface diffusion at longer times. Finally, the corresponding first passage problem for the cylindrical geometry is analysed.
We study pattern-forming instabilities in reaction-advection-diffusion systems. We develop an approach based on Lyapunov-Bloch exponents to figure out the impact of a spatially periodic mixing flow on the stability of a spatially homogeneous state. We deal with flows that are periodic in space and may have arbitrary time dependence. We propose a model that is discrete in time, where reaction, advection, and diffusion act as successive operators, and show that a mixing advection can lead to a pattern-forming instability in a two-component system where only one of the species is advected. Physically, this can be explained as crossing the threshold of the Turing instability due to an effective increase of one of the diffusion constants.
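A toy sketch of the discrete-in-time operator splitting described above, with reaction, advection and diffusion applied successively to a small perturbation and the leading growth rate estimated by renormalization (the Jacobian, flow and parameters are illustrative assumptions, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two species u, v on a 1D periodic domain; only u is advected.
N, L, dt = 128, 2 * np.pi, 0.1
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
Du, Dv = 0.01, 0.3
J = np.array([[0.9, -1.0],
              [1.2, -1.1]])            # linearized reaction map applied once per step (illustrative)

def step(u, v):
    u, v = J[0, 0] * u + J[0, 1] * v, J[1, 0] * u + J[1, 1] * v        # reaction
    u = np.interp((x - 0.5 * np.sin(x) * dt) % L, x, u, period=L)      # advect u only
    u = np.real(np.fft.ifft(np.exp(-Du * k**2 * dt) * np.fft.fft(u)))  # diffuse
    v = np.real(np.fft.ifft(np.exp(-Dv * k**2 * dt) * np.fft.fft(v)))
    return u, v

# crude estimate of the leading growth rate per step (positive => instability)
u, v = rng.normal(size=N), rng.normal(size=N)
norm = np.sqrt(np.mean(u**2 + v**2))
u, v = u / norm, v / norm
log_growth, n_steps = 0.0, 500
for _ in range(n_steps):
    u, v = step(u, v)
    norm = np.sqrt(np.mean(u**2 + v**2))
    log_growth += np.log(norm)
    u, v = u / norm, v / norm
print("mean log growth per step:", log_growth / n_steps)
```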
The space missions Voyager and Cassini together with earthbound observations revealed a wealth of structures in Saturn's rings. There are, for example, waves being excited at ring positions which are in orbital resonance with Saturn's moons. Other structures can be assigned to embedded moons, such as empty gaps, moon-induced wakes or S-shaped propeller features. Furthermore, irregular radial structures are observed on scales ranging from 10 meters to kilometers. Here some of these structures will be discussed in the framework of hydrodynamical modeling of Saturn's dense rings. For this purpose we will characterize the physical properties of the ring particle ensemble by mean field quantities and point to the special behavior of the transport coefficients. We show that unperturbed rings can become unstable and how diffusion acts in the rings. Additionally, the alternative streamline formalism is introduced to describe perturbed regions of dense rings, with applications to wake damping and the dispersion relation of the density waves.
In this thesis, we discuss the characterization of orthogroups by so-called disjunctions of identities. The orthogroups are a subclass of the class of completely regular semigroups, a generalization of the concept of a group. Thus, for every element of an orthogroup there is a certain kind of inverse element such that both elements commute. Based on a fundamental result by A. H. Clifford, every completely regular semigroup is a semilattice of completely simple semigroups. This allows the description of the gross structure of such semigroups. In particular, every orthogroup is a semilattice of rectangular groups, which are isomorphic to direct products of rectangular bands and groups. Semilattices of rectangular groups coming from various classes are characterized using the concept of an alternative variety, a generalization of the classical idea of a variety by Birkhoff.
After starting with some fundamental definitions and results concerning semigroups, we introduce the concept of disjunctions of identities and summarize some necessary properties. In particular, we present a disjunction of identities which is sufficient for a semigroup to be completely regular. Furthermore, we derive from this identity some statements concerning Rees matrix semigroups, a possible representation of completely simple semigroups. A main result of this thesis is the general description of disjunctions of identities such that a completely regular semigroup satisfying the described identity is a semilattice of left groups (right groups / groups). In this case the completely regular semigroup is an orthogroup. Furthermore, we define various classes of rectangular groups such that there is an exponent taken from a set of pairwise coprime positive integers. An important result is the characterization of the class of all semilattices of particular rectangular groups (taken from the classes defined before) using a set-theoretically minimal set of disjunctions of identities. Additionally, we investigate semilattices of groups (so-called Clifford semigroups). For this purpose we consider abelian groups of particular exponents and prove some well-known results from the theory of Clifford semigroups in an alternative way, applying the concept of disjunctions of identities. As a practical application of the results concerning semilattices of left zero semigroups and right zero semigroups, we identify a particular transformation semigroup. For more detailed information about the product of two arbitrary elements of a semilattice of semigroups, we introduce the concept of strong semilattices of semigroups. It is well known that a semilattice of groups is a strong semilattice of groups. So we can characterize a strong semilattice of groups of particular pairwise coprime exponents by disjunctions of identities. Additionally, we describe the class of all strong semilattices of left zero semigroups and right zero semigroups with the help of such identities, and we relate this statement to the theory of normal bands. A possible extension of the already described semilattices of rectangular groups can be achieved by an auxiliary total order (in terms of chains of semigroups). To this end we present a corresponding characterization by disjunctions of identities which is obviously minimal. A list of open questions which have arisen during the research for this thesis but have been left open is attached.
In this thesis new quantizations are constructed for pseudo-differential boundary value problems (BVPs) on manifolds with edge. The shape of the operators comes from Boutet de Monvel's calculus, which exists on smooth manifolds with boundary. The singular case, here with edge and boundary, is much more complicated. The present approach simplifies the operator-valued symbolic structures by using suitable Mellin quantizations on infinite stretched model cones of wedges with boundary. The Mellin symbols themselves are, modulo smoothing ones with asymptotics, holomorphic in the complex Mellin covariable. One of the main results is the construction of parametrices of elliptic elements in the corresponding operator algebra, including elliptic edge conditions.
In 2015 the second conference „Cloud Storage Deployment in Academics“ took place. Interest in this topic was again high, and topics established in 2014, such as data security and scalability, were complemented by new ones such as federations or technical integration into existing infrastructures. This reflects the advances in the establishment of cloud-based storage systems. This publication contains the contributions to the conference „Cloud Storage Deployment in Academics 2015“, which took place in May 2015 at TU Berlin.
"New media" was for many years the codeword for computers that were supposed to find their way into school lessons, at least if the advocates had their way. The resistance, especially in primary school, was strong and varied. It is understandable that, shortly after the playful introduction to learning in kindergarten, at a time when pupils still have to practise social interaction and are meant to acquire fine and gross motor skills, sitting alone in front of a screen is not among the top priorities, and in our opinion it should not be. In recent years, however, the notion of new media has changed, and what used to be associated with it has become, with the "digitalisation" not only of school teaching but of life as a whole, a linchpin of education. Instead of bulky computers with monitors, which steered collaboration in the wrong direction through the very layout of the computer rooms, mobile devices in the hands of the pupils have taken over. Pupils can now work together on one device, interact directly with the on-screen content, use the cameras, microphones and sensors to capture and process authentic data, work with the devices outside the classroom or the school, and carry almost the entire knowledge of the internet with them at nearly all times. The focus of this volume is therefore the use of tablets and the "apps" running on them in mathematics teaching. Five contributions make concrete teaching proposals that can serve as blueprints for app-supported lessons. The volume is complemented by a general guide for evaluating apps for mathematics teaching, together with examples.
A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they have relationships with such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids.
In these lecture notes, numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of the axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, the least n-dinilpotent congruence on a free (strong) doppelsemigroup and the least left n-dinilpotent congruence on a free doppelsemigroup are characterized.
The book addresses graduate students, post-graduate students, researchers in algebra and interested readers.
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions are false for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solutions fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply if the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
The first main goal of this thesis is to develop a concept of approximate differentiability of higher order for subsets of Euclidean space that allows one to characterize higher order rectifiable sets, extending, in a sense, well-known facts for functions. We emphasize that for every subset A of Euclidean space and for every integer k ≥ 2 we introduce the approximate differential of order k of A, and we prove it is a Borel map whose domain is a (possibly empty) Borel set. This concept could be helpful for dealing with higher order rectifiable sets in applications.
The other goal is to extend to general closed sets a well known theorem of Alberti on the second order rectifiability properties of the boundary of convex bodies. The Alberti theorem provides a stratification of second order rectifiable subsets of the boundary of a convex body based on the dimension of the (convex) normal cone. Considering a suitable generalization of this normal cone for general closed subsets of the Euclidean space and employing some results from the first part we can prove that the same stratification exists for every closed set.
Integral Fourier operators
(2017)
This volume of contributions, based on lectures delivered at a school on Fourier Integral Operators held in Ouagadougou, Burkina Faso, 14–26 September 2015, provides an introduction to Fourier Integral Operators (FIO) for a readership of Master and PhD students as well as any interested layperson. Considering the wide spectrum of their applications and the richness of the mathematical tools they involve, FIOs lie at the crossroad of many a field. This volume offers the necessary background, whether analytic or geometric, to get acquainted with FIOs, complemented by more advanced material presenting various aspects of active research in that area.
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held in Bogotá, at Universidad de los Andes, from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who communicated recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" from the Max-Planck-Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
In this thesis, stochastic dynamics modelling collective motions of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos when the number N of agents goes to infinity. The deterministic mean-field kinetic model of Cucker and Smale for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to the understanding of various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts onto a stochastic system of interacting particles derived from the Keller and Segel 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
We analyze a noisy inverse regression model under random design, with the aim of estimating the unknown target function based on a given set of data drawn according to some unknown probability distribution. Our estimators are all constructed by kernel methods, relying on a reproducing kernel Hilbert space structure and spectral regularization methods.
A first main result establishes upper and lower bounds for the rate of convergence under a given source condition assumption, restricting the class of admissible distributions. Since kernel methods scale poorly when massive datasets are involved, we study in more detail one example of saving computation time and memory requirements. We show that parallelizing spectral algorithms also leads to minimax optimal rates of convergence, provided the number of machines is chosen appropriately.
We emphasize that so far all estimators depend on the assumed a priori smoothness of the target function and on the eigenvalue decay of the kernel covariance operator, which are in general unknown. Obtaining good, purely data-driven estimators constitutes the problem of adaptivity, which we handle for the single-machine problem via a version of the Lepskii principle.
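A toy sketch of the divide-and-conquer idea mentioned above, assuming kernel ridge regression (one spectral regularization method) with naive averaging over machines; it does not reflect the rates, source conditions or the Lepskii-type adaptation studied in the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)

def gaussian_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3):
    """Kernel ridge regression: Tikhonov regularization in the RKHS."""
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    return X, alpha

def krr_predict(model, Xnew):
    Xtr, alpha = model
    return gaussian_kernel(Xnew, Xtr) @ alpha

def distributed_krr(X, y, m=5, lam=1e-3):
    """Split the data over m 'machines', fit locally; predictions are averaged."""
    parts = np.array_split(rng.permutation(len(y)), m)
    return [krr_fit(X[idx], y[idx], lam) for idx in parts]

X = rng.uniform(0, 1, size=(500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=500)
models = distributed_krr(X, y)
Xtest = np.linspace(0, 1, 5)[:, None]
print(np.mean([krr_predict(mod, Xtest) for mod in models], axis=0))
```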
Raum und Form
(2017)
The present work will introduce a Finite State Machine (FSM) that processes any Collatz sequence; further, we will investigate its behavior in relation to transformations of a special infinite input. Moreover, we will prove that the machine's word transformation is equivalent to the standard Collatz number transformation and subsequently discuss the possibilities of using this approach to solve similar problems. The benefit of this approach is that the investigation of the word transformation performed by the Finite State Machine is less complicated than that of the traditional number-theoretical transformation.
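For reference, the standard Collatz number transformation referred to above (the word-based finite state machine itself is specific to the present work and is not reproduced here):

```python
def collatz_step(n: int) -> int:
    """Standard Collatz transformation: n -> n/2 if n is even, else 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_sequence(n: int) -> list[int]:
    seq = [n]
    while n != 1:
        n = collatz_step(n)
        seq.append(n)
    return seq

print(collatz_sequence(7))  # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```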
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work. We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them gives good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
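For orientation, the classical incompressible Navier-Stokes system in vector notation together with its textbook reformulation for a velocity 1-form u (with d and δ the de Rham differential and codifferential); the generalised equations of the thesis, attached to an arbitrary elliptic quasicomplex, are not reproduced here:

```latex
\begin{aligned}
  % vector-analysis form
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} - \nu \Delta \mathbf{u} + \nabla p &= \mathbf{f},
  & \nabla\cdot\mathbf{u} &= 0,\\
  % the same system for the velocity 1-form u on flat space
  \partial_t u + \nabla_u u + \nu\,(d\delta + \delta d)\,u + dp &= f,
  & \delta u &= 0.
\end{aligned}
```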
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects. Furthermore, underlying sources significantly contributing to this variability, e.g. the use of comedication, should be identified. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate relevant and known physiological processes which contribute to the overlying process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, and thus reliable parameter estimation becomes more complex and partly impossible. As a consequence, the integration of powerful mathematical estimation approaches like the NLME modeling approach (which is widely used in data-driven modeling) with the mechanistic modeling approach is not well established; the observed data is rather used as a confirming instead of a model-informing and model-building input.
Another aggravating circumstance for an integrated approach is the inaccessibility of the details of the NLME methodology, which would be needed to adapt these approaches to the specifics and needs of mechanistic modeling. Despite the fact that the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes, important steps are not described or are only heuristically motivated, e.g. the iterative algorithm to finally determine the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive description, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine estimates of an NLME model is described.
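A minimal sketch of the EM idea in a mixed-effects setting, assuming the simplest possible random-intercept model y_ij = θ + b_i + ε_ij with b_i ~ N(0, ω²) and ε_ij ~ N(0, σ²), for which the E-step is available in closed form; real NLME models require the approximation methods discussed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

# simulate data: N subjects, n observations each (assumed toy values)
N, n, theta_true, omega_true, sigma_true = 50, 8, 2.0, 1.0, 0.5
b = rng.normal(0.0, omega_true, N)
y = theta_true + b[:, None] + rng.normal(0.0, sigma_true, (N, n))

theta, omega2, sigma2 = 0.0, 1.0, 1.0
for _ in range(200):
    # E-step: the posterior of each random effect b_i is Gaussian (conjugacy)
    post_var = 1.0 / (n / sigma2 + 1.0 / omega2)
    post_mean = post_var * (y - theta).sum(axis=1) / sigma2
    # M-step: maximize the expected complete-data log-likelihood
    theta = np.mean(y - post_mean[:, None])
    omega2 = np.mean(post_mean**2 + post_var)
    resid = y - theta - post_mean[:, None]
    sigma2 = np.mean(resid**2) + post_var   # residual variance includes posterior variance of b_i

print(theta, np.sqrt(omega2), np.sqrt(sigma2))  # should approach 2.0, 1.0, 0.5
```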
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach for combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability and a priori known unexplained variability are incorporated to further mechanistically drive the model development. In conclusion, correlation between parameters and between covariates is automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility of predicting drug concentrations within all organs and tissues in the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action for levofloxacin. These predictions are compared with data of muscle and adipose tissue obtained by microdialysis, an invasive technique to measure a proportion of drug in the tissue, which allows one to approximate the concentrations in the interstitial fluid of tissues. Because comparisons of human in vivo tissue PK with PBPK predictions have so far not been established, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows an adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stage of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of Quantitative Psycholinguistics, analysing repeated eye movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them.
In a bounded domain with smooth boundary in R^3 we consider the stationary Maxwell equations for a function u with values in R^3, subject to a nonhomogeneous condition (u,v)_x = u_0 on the boundary, where v is a given vector field and u_0 a function on the boundary. We specify this problem within the framework of the Riemann-Hilbert boundary value problems for the Moisil-Teodorescu system. The latter is proved to satisfy the Shapiro-Lopatinskij condition if and only if the vector v is at no point tangent to the boundary. The Riemann-Hilbert problem for the Moisil-Teodorescu system fails to possess an adjoint boundary value problem with respect to the Green formula which satisfies the Shapiro-Lopatinskij condition. We therefore develop the construction of the Green formula to obtain a proper concept of adjoint boundary value problem.
Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the sources of largest uncertainty in climate projections. Earth's radiation balance is altered by aerosols depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index, and the single scattering albedo are commonly used properties of this kind, linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). While this model offers a reasonable treatment for spherically approximated particles, it no longer provides a viable description for other naturally occurring, arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation, introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is classified as a nonlinear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods, and quantifies the ill-posedness. The latter improves our understanding of the main cause fomenting instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is computationally so costly that it would limit a retrieval analysis to an impractical degree. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations which are additionally time-consuming. We overcome these difficulties by using precalculated databases and a sophisticated retrieval software (SphInX: Spheroidal Inversion eXperiments) especially developed for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes is used as a basis for our algorithms. Synthetic data retrievals are performed for various simulated atmospheric scenarios in order to test the efficiency of different regularization methods. A major concern here is the gap in the contemporary literature regarding full sets of uncertainties across a wide variety of numerical instances. To this end, the most appropriate methods are identified through a thorough analysis of their overall behavior with regard to accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored, for the first time, from the point of view of the microphysical retrieval. Finally, our approach is tested on various measurement cases, giving further insight for future algorithm improvements.
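To make the role of regularization in such ill-posed inversions concrete, the following minimal sketch shows a Tikhonov-regularized least-squares solve for a generic discretized problem. The forward matrix A, the data vector y and the regularization parameters are placeholder assumptions for illustration only; they do not correspond to the SphInX software, the T-matrix databases or the hybrid methods described above.

import numpy as np

def tikhonov_solve(A, y, alpha):
    """Minimize ||A x - y||^2 + alpha * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))                      # placeholder ill-conditioned forward model
x_true = np.exp(-np.linspace(-2, 2, 50) ** 2)      # smooth placeholder "size distribution"
y = A @ x_true + 0.01 * rng.normal(size=30)        # few noisy data points
for alpha in (1e-4, 1e-2, 1.0):                    # scan the regularization parameter
    x_hat = tikhonov_solve(A, y, alpha)
    print(alpha, np.linalg.norm(A @ x_hat - y), np.linalg.norm(x_hat - x_true))

Larger values of alpha suppress the oscillatory components at the price of a larger data misfit, which is exactly the trade-off the regularization methods above are designed to balance.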
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion by flipping its excursions from the origin with a given probability. Such a process behaves as the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, behaves as a diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. A multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semipermeability) or by its infinitesimal generator as a Markov process.
In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex-analytic properties. Thanks to this representation, we write the transition densities of the two-skew Brownian motion with constant drift explicitly as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new and useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a target density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental densities is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one ultimately samples directly from the law without any approximation other than machine precision.
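As a generic illustration of the basic rejection sampling step recalled above (not of the generalized algorithm developed in the thesis), the sketch below samples from a Beta(2,2) target using a uniform instrumental density; the bound M = 1.5 on the density ratio is specific to this toy example.

import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(target_pdf, instrumental_sampler, instrumental_pdf, M, n):
    """Basic rejection sampling: accept a proposal x with probability target(x) / (M * instrumental(x))."""
    samples = []
    while len(samples) < n:
        x = instrumental_sampler()
        if rng.uniform() * M * instrumental_pdf(x) <= target_pdf(x):
            samples.append(x)
    return np.array(samples)

beta22 = lambda x: 6.0 * x * (1.0 - x)             # Beta(2,2) density on [0,1], maximum 1.5
draws = rejection_sample(beta22, rng.uniform, lambda x: 1.0, 1.5, 10_000)
print(draws.mean())                                # should be close to 0.5

The accepted draws follow the target law exactly, which is the property the thesis exploits with a far less trivial target (the two-skew transition density) and instrumental density (the Brownian transition density with drift).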
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the cases of a continuous drift and of a drift with a single jump have been treated. The theoretical method we give allows us to deal with any finite number of discontinuities. We then focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As initiated in the classical papers, the technique is elaborated in generalised Hardy spaces, also called Hardy-Smirnov spaces.
The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due to the remarkable evolutionary adaptation that the virus exhibits through the accumulation of drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly when treatment failure has previously occurred with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in assessing treatment failure with a given drug combination and in exploring potential salvage treatment regimens.
Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs. However, such models typically use simplified and coarse-grained mutation schemes, which curbs the extent of their application to drug-specific clinical mutation data for assessing potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate drug effects mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and in understanding the impact of drug efficacy on mutation dynamics.
In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling approach enables us to study the dynamics of different mutant genotypes and to assess mutational abundance at virological failure. As an application of our model, we estimate in vivo fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action, including molecules that target host factors.
Additionally, we address another important aspect of the clinical management of HIV disease, namely drug pharmacokinetics. Time-dependent changes in in vivo drug concentration can affect the antiviral effect and influence decisions on dosing intervals. We present a framework that provides an integrated understanding of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression.
Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting.
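To illustrate the kind of multiple-dosing quantities mentioned above, here is a minimal one-compartment pharmacokinetic sketch computing the steady-state accumulation ratio and a concentration profile by superposition of exponentially decaying doses. The dose, volume of distribution, half-life and dosing interval are illustrative placeholder values, not parameters estimated in the thesis.

import numpy as np

def accumulation_ratio(t_half, tau):
    """Steady-state accumulation ratio R = 1 / (1 - 2**(-tau / t_half)) for repeated dosing."""
    return 1.0 / (1.0 - 2.0 ** (-tau / t_half))

def concentration(t, dose, volume, t_half, tau):
    """Concentration at time t as a superposition of all doses given at 0, tau, 2*tau, ..."""
    ke = np.log(2.0) / t_half                       # first-order elimination rate constant
    dose_times = np.arange(0.0, t + 1e-9, tau)      # dosing times up to t
    return (dose / volume) * np.exp(-ke * (t - dose_times)).sum()

print(accumulation_ratio(t_half=40.0, tau=24.0))    # > 1: the drug accumulates between doses
print(concentration(t=72.0, dose=300.0, volume=50.0, t_half=40.0, tau=24.0))

A half-life longer than the dosing interval gives an accumulation ratio well above one, which is the regime in which trough concentrations, and hence sustained viral suppression, depend strongly on the dosing schedule.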
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for understanding the observed system's intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, the generic statistical features of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First of all, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile river at Aswan and from the weather station Tuscaloosa, Alabama, both comprising documented changes. The method's performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene period. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
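As a toy illustration of scoring candidate change points in a time series, the sketch below assigns to each location a normalized score for a single change in mean under Gaussian noise with an assumed known standard deviation. It is a crude profile-likelihood stand-in for illustration only, not the kernel-based Bayesian inversion with least informative priors developed in the thesis.

import numpy as np

def changepoint_scores(x, sigma=1.0):
    """Normalized scores for a single change in mean occurring between index k-1 and k."""
    n = len(x)
    logscore = np.full(n, -np.inf)
    for k in range(2, n - 1):
        left, right = x[:k], x[k:]
        resid = np.concatenate([left - left.mean(), right - right.mean()])
        logscore[k] = -0.5 * np.sum(resid ** 2) / sigma ** 2
    logscore -= logscore.max()                      # normalize in log space for stability
    score = np.exp(logscore)
    return score / score.sum()

rng = np.random.default_rng(3)
series = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(1.5, 1.0, 100)])
print(np.argmax(changepoint_scores(series)))        # close to the true change at index 150

The full Bayesian treatment additionally marginalizes over the unknown segment parameters and, in the kernel-based extension, combines many such local inferences into a proxy posterior for multiple transitions.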
We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. Actually, the weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. On using the particular properties of the de Rham complex we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and the map I+K is open and injective in the space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
The main results of this thesis are formulated in a class of surfaces (varifolds) generalizing closed and connected smooth submanifolds of Euclidean space and allowing for singularities. We consider an indecomposable varifold of dimension at least two in some Euclidean space such that the first variation is locally bounded, the total variation is absolutely continuous with respect to the weight measure, the density of the weight measure is at least one outside a set of weight measure zero, and the generalized mean curvature is locally summable to a natural power (the dimension of the varifold minus one) with respect to the weight measure. The thesis presents an improved estimate of the set where the lower density is small, in terms of the one-dimensional Hausdorff measure. Moreover, if the support of the weight measure is compact, then the intrinsic diameter with respect to the support of the weight measure is estimated in terms of the generalized mean curvature. This estimate is analogous to Peter Topping's diameter control for closed connected manifolds smoothly immersed in some Euclidean space. Previously, it was not known whether the hypotheses of this thesis imply that two points in the support of the weight measure have finite geodesic distance.
Convoluted Brownian motion
(2016)
In this paper we analyse semimartingale properties of a class of Gaussian periodic processes, called convoluted Brownian motions, obtained by convolution between a deterministic function and a Brownian motion. A classical example in this class is the periodic Ornstein-Uhlenbeck process. We compute their characteristics and show that, in general, they are neither Markovian nor satisfy a time-Markov field property. Nevertheless, by enlargement of filtration and/or addition of a one-dimensional component, one can in some cases recover Markovianity. We treat exhaustively the case of the bidimensional trigonometric convoluted Brownian motion and the higher-dimensional monomial convoluted Brownian motion.
Lyapunov Exponents
(2016)
Lyapunov exponents lie at the heart of chaos theory, and are widely used in studies of complex dynamics. Utilising a pragmatic, physical approach, this self-contained book provides a comprehensive description of the concept. Beginning with the basic properties and numerical methods, it then guides readers through to the most recent advances in applications to complex systems. Practical algorithms are thoroughly reviewed and their performance is discussed, while a broad set of examples illustrate the wide range of potential applications. The description of various numerical and analytical techniques for the computation of Lyapunov exponents offers an extensive array of tools for the characterization of phenomena such as synchronization, weak and global chaos in low and high-dimensional set-ups, and localization. This text equips readers with all the investigative expertise needed to fully explore the dynamical properties of complex systems, making it ideal for both graduate students and experienced researchers.
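As one of the simplest numerical recipes of the kind surveyed in the book, the largest Lyapunov exponent of a one-dimensional map can be estimated by averaging the logarithm of the local stretching rate along an orbit. The sketch below does this for the logistic map; the parameter values and iteration counts are illustrative choices, not taken from the book.

import numpy as np

def lyapunov_logistic(r, n_iter=100_000, n_transient=1_000, x0=0.4):
    """Largest Lyapunov exponent of x -> r*x*(1-x) as the orbit average of log|f'(x)|."""
    x = x0
    for _ in range(n_transient):                    # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        s += np.log(abs(r * (1 - 2 * x)))           # |f'(x)| = |r (1 - 2x)|
    return s / n_iter

print(lyapunov_logistic(4.0))                       # about ln 2 ≈ 0.693 in the fully chaotic case

For higher-dimensional systems the analogous computation requires evolving tangent vectors and periodic reorthonormalization, which is one of the practical algorithms the book reviews in detail.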
It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined. One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral then is defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero.
In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates.
Finally, we apply these results to obtain short time asymptotic expansions of the heat kernel also in degenerate cases (i.e. at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which gives relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics.
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
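The sketch below illustrates the basic idea of running conjugate gradient on the kernel system and using the number of iterations as the regularization parameter; the Gaussian kernel, its bandwidth and the hold-out selection rule are illustrative assumptions, not the exact protocol analysed in the paper.

import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_cg_iterates(K, y, n_iter):
    """Conjugate gradient iterates for K @ alpha = y; stopping early acts as regularization."""
    alpha = np.zeros_like(y)
    r = y.copy()
    p = r.copy()
    out = []
    for _ in range(n_iter):
        Kp = K @ p
        step = (r @ r) / (p @ Kp)
        alpha = alpha + step * p
        r_new = r - step * Kp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        out.append(alpha.copy())
    return out

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=80)
Xtr, ytr, Xva, yva = X[:60], y[:60], X[60:], y[60:]
K = gaussian_kernel(Xtr, Xtr, gamma=5.0)
Kva = gaussian_kernel(Xva, Xtr, gamma=5.0)
errors = [np.mean((Kva @ a - yva) ** 2) for a in kernel_cg_iterates(K, ytr, 20)]
print(int(np.argmin(errors)) + 1)                   # selected number of CG iterations

Early iterations fit only the smooth components of the target; later iterations fit the noise, so the stopping time plays the same role as the regularization parameter in penalized methods.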
Kreise - Punkte - Linien
(2015)
Zum Geleit
(2015)
Einführung
(2015)
The newly launched series „studieren++" has grown out of a lecture series offered by the University of Potsdam. What is special about this lecture series is its multidisciplinary ambition and its consistently practised collaboration across disciplinary boundaries. This interdisciplinarity, practised not only across institute but also across faculty boundaries, allows a problem or topic to be viewed from different perspectives. Scientific questions are complex and not always confined to a single discipline. Grasping them in their entirety and developing sustainable solution strategies or concepts often succeeds only through multidisciplinary cooperation. A course such as this one is an excellent opportunity, and not only for the students of a university, to look beyond the boundaries of one's own discipline and to cultivate collaboration with scientists from other fields. In this way one learns to adopt other points of view and to move between the disciplines, a competence that is of great value in today's highly complex working world.
The present first volume of the series is devoted to the theme "Raum und Zahl" (space and number) and emerged from a lecture series held in the winter semester of 2013/2014. Three of the five faculties, nine institutes of the University of Potsdam in total, took part in the lectures and took up this fascinating topic. As someone who has worked scientifically for many years on computational geometry as well as on spatial databases and navigation systems, I can only affirm that the connections between space and number, between spaces and numbers, deserve to be anchored much more firmly in public awareness. Grasping and understanding spaces also in quantitative terms is a cultural technique whose importance is, if anything, still growing, especially given that we are genetically not particularly well prepared for such challenges. Many of our relevant genes still date from the time of the savannah, a time when the concept of space referred almost exclusively to one's immediate spatial surroundings and numbers beyond 10 had little relevance for one's survival.
As President of the University of Potsdam, I am particularly pleased that the scientists represented here agreed to share their reflections with the students and with their colleagues. I would like to thank my colleague Hans-Joachim Petsche for his commitment and congratulate him on this successful series. The spirit of science, which is lived not only in solitude in the office or laboratory but, especially at a university, should also be actively carried to the outside world, becomes visible here in a special way. I wish you much pleasure in reading this volume and look forward to further publications in this series.
Using an algorithm based on a retrospective rejection sampling scheme, we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., with the property that the composition of any two neighbouring operators has order less than two.
In many statistical applications, the aim is to model the relationship between covariates and some outcomes. The choice of an appropriate model depends on the outcome and the research objectives: linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a binary response variable and a set of explanatory covariates. However, epidemiologic cohort studies are quite expensive in terms of data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling draws a small random sample from the entire cohort, called the subcohort. The advantage of this design is that covariate and follow-up data are recorded only for the subcohort and for all cases (all members of the cohort who develop the event of interest during follow-up).
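A minimal sketch of the sampling step of the case-cohort design described above: covariate data would be collected only for a random subcohort together with all cases. The event indicator and the subcohort fraction are placeholder inputs used purely for illustration.

import numpy as np

rng = np.random.default_rng(5)

def case_cohort_indices(event, subcohort_fraction=0.1):
    """Indices of subjects whose covariate data are collected: a random subcohort plus all cases."""
    in_subcohort = rng.random(len(event)) < subcohort_fraction
    return np.flatnonzero(in_subcohort | (event == 1))

event = rng.binomial(1, 0.05, size=10_000)           # rare event in a large cohort
selected = case_cohort_indices(event)
print(len(selected), event.sum())                     # far fewer subjects than the full cohort

Because only this subset carries covariate information, the likelihood and its asymptotic variance must account for the sampling design, which is what the estimators studied in the thesis do.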
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach for the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between the theoretical values and the simulation results for the asymptotic variance of the estimator is presented.
The logistic regression is adequate when the binary outcome is available for all subjects over a fixed time interval. In practice, however, the observations in clinical trials are frequently collected over different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data, for example when an individual drops out of the study before the end of data collection or has not experienced the event of interest by the end of the study. Such observations are called censored observations. Survival analysis is necessary to handle these problems; moreover, the time to the occurrence of the event of interest is taken into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed a model focused on the hazard function. The Cox model is assumed to be
λ(t|X) = λ0(t) exp(β^T X),
where λ0(t) is an unspecified baseline hazard at time t, X is the vector of covariates and β is a p-dimensional vector of coefficients.
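For concreteness, the following sketch evaluates the Cox partial log-likelihood corresponding to the displayed model (Breslow form, assuming no tied event times). It is a generic illustration on simulated placeholder data, not the estimability or design computations carried out in the thesis.

import numpy as np

def cox_partial_loglik(beta, times, event, X):
    """Sum over observed events of beta'x_i minus log of the sum of exp(beta'x_j) over the risk set."""
    eta = X @ beta
    ll = 0.0
    for i in np.flatnonzero(event == 1):
        at_risk = times >= times[i]                  # risk set {j : t_j >= t_i}
        ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return ll

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
times = rng.exponential(scale=np.exp(-X @ np.array([0.5, -0.3])))
event = rng.binomial(1, 0.8, size=200)               # some observations are censored
print(cox_partial_loglik(np.array([0.5, -0.3]), times, event, X))

The observed information matrix In(β) studied in the thesis is (minus) the Hessian of exactly this partial log-likelihood, which is why its representations govern the estimability of β0 and the choice of optimal covariates.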
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results of Andersen and Gill (1982) for the Cox model. In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, a matrix Σ is identified as the inverse of the asymptotic variance matrix of the maximum partial likelihood estimator (MPLE) of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated. The efficiencies are then determined for neighborhoods of the exponential models. It turns out that, for a fixed parameter β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, an extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that, under the null hypothesis, the distribution of the properly standardized quadratic form of this d-dimensional vector tends to a chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and under a particular alternative; it gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986.
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additional noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and on the radius of the source condition set.
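One of the simplest members of the class of spectral regularization methods considered here is spectral cutoff (truncated SVD). The sketch below applies it to a generic discretized operator A and noisy data y, with the truncation level playing the role of the regularization parameter; all inputs are placeholders and the discrete matrix setting is only a stand-in for the random-design operator framework of the paper.

import numpy as np

def spectral_cutoff(A, y, k):
    """Invert only the k largest singular values of A when reconstructing f from y ≈ A f."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

rng = np.random.default_rng(7)
A = rng.normal(size=(100, 60)) / 100.0
f_true = np.sin(np.linspace(0, np.pi, 60))
y = A @ f_true + 0.01 * rng.normal(size=100)
for k in (5, 20, 60):                                # truncation level = regularization parameter
    print(k, np.linalg.norm(spectral_cutoff(A, y, k) - f_true))

Choosing k to balance the bias from the discarded components against the amplified noise in the retained ones is exactly the bias-variance trade-off behind the minimax rates discussed above.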
By perturbing the differential of a (cochain-)complex by "small" operators, one obtains what is referred to as quasicomplexes, i.e. a sequence whose curvature is not equal to zero in general. In this situation the cohomology is no longer defined. Note that it depends on the structure of the underlying spaces whether or not an operator is "small." This leads to a magical mix of perturbation and regularisation theory. In the general setting of Hilbert spaces compact operators are "small." In order to develop this theory, many elements of diverse mathematical disciplines, such as functional analysis, differential geometry, partial differential equation, homological algebra and topology have to be combined. All essential basics are summarised in the first chapter of this thesis. This contains classical elements of index theory, such as Fredholm operators, elliptic pseudodifferential operators and characteristic classes. Moreover we study the de Rham complex and introduce Sobolev spaces of arbitrary order as well as the concept of operator ideals. In the second chapter, the abstract theory of (Fredholm) quasicomplexes of Hilbert spaces will be developed. From the very beginning we will consider quasicomplexes with curvature in an ideal class. We introduce the Euler characteristic, the cone of a quasiendomorphism and the Lefschetz number. In particular, we generalise Euler's identity, which will allow us to develop the Lefschetz theory on nonseparable Hilbert spaces. Finally, in the third chapter the abstract theory will be applied to elliptic quasicomplexes with pseudodifferential operators of arbitrary order. We will show that the Atiyah-Singer index formula holds true for those objects and, as an example, we will compute the Euler characteristic of the connection quasicomplex. In addition to this we introduce geometric quasiendomorphisms and prove a generalisation of the Lefschetz fixed point theorem of Atiyah and Bott.
The aim of this paper is to bring together two areas which are of great importance for the study of overdetermined boundary value problems. The first area is homological algebra which is the main tool in constructing the formal theory of overdetermined problems. And the second area is the global calculus of pseudodifferential operators which allows one to develop explicit analysis.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
We elaborate a boundary Fourier method for studying an analogue of the Hilbert problem for analytic functions within the framework of generalised Cauchy-Riemann equations. The boundary value problem need not satisfy the Shapiro-Lopatinskij condition, and so it fails to be Fredholm in Sobolev spaces. We give a solvability condition for the Hilbert problem, which resembles those for ill-posed problems, and construct an explicit formula for approximate solutions.
We continue our study of invariant forms of the classical equations of mathematical physics, such as the Maxwell equations or the Lamé system, on manifolds with boundary. To this end we interpret them in terms of the de Rham complex at a certain step. Using the structure of the complex, we gain an insight that allows us to predict a degeneracy deeply encoded in the equations. In the present paper we develop an invariant approach to the classical Navier-Stokes equations.
Microsaccades
(2015)
The first thing we do upon waking is open our eyes. Rotating them in our eye sockets, we scan our surroundings and collect the information into a picture in our head. Eye movements can be split into saccades and fixational eye movements, which occur when we attempt to fixate our gaze. The latter consist of microsaccades, drift and tremor. Before we even lift our eyelids, eye movements such as saccades and microsaccades, which let the eyes jump from one position to another, have partially been prepared in the brain stem. Saccades and microsaccades are often assumed to be generated by the same mechanisms. How saccades and microsaccades can be classified according to their shape, however, has not yet been reported in a statistical manner, and research has put more effort into investigating the properties and generation of microsaccades only in the last decade. Consequently, we are only beginning to understand the dynamic processes governing microsaccadic eye movements. Within this thesis, the dynamics governing the generation of microsaccades are assessed and a model for the underlying processes is developed. Eye movement trajectories from different experiments, recorded with a video-based eye tracking technique, are used, and a novel method is proposed for the scale-invariant detection of saccades (events of large amplitude) and microsaccades (events of small amplitude). Using a time-frequency approach, the method is examined in different experiments and validated against simulated data. A shape model is suggested that allows for a simple estimation of saccade- and microsaccade-related properties. For sequences of microsaccades, a time-dynamic Markov model with a memory horizon that changes over time is proposed, which best describes such sequences.
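For orientation, the sketch below implements a plain velocity-threshold event detector on a gaze trace. This is a generic textbook-style baseline with arbitrary threshold values, not the scale-invariant time-frequency method proposed in the thesis, and the input variable names are hypothetical.

import numpy as np

def velocity_threshold_events(x, y, dt, threshold=30.0, min_samples=3):
    """Return (start, end) index pairs of samples whose 2D eye velocity exceeds the threshold."""
    speed = np.hypot(np.gradient(x, dt), np.gradient(y, dt))   # velocity magnitude in deg/s
    fast = np.append(speed > threshold, False)                 # sentinel closes a trailing run
    events, start = [], None
    for i, flag in enumerate(fast):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    return events

# Hypothetical usage on a 500 Hz recording:
# events = velocity_threshold_events(gaze_x_deg, gaze_y_deg, dt=1.0 / 500.0)

A fixed threshold like this treats large saccades and small microsaccades very differently depending on noise level and sampling rate, which is precisely the limitation that motivates a scale-invariant detection method.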
We consider a Cauchy problem for the heat equation in a cylinder X x (0,T) over a domain X in n-dimensional space, with data on a strip lying on the lateral surface. The strip is of the form S x (0,T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S, we derive an explicit formula for solutions of this problem.
In this paper we study the convergence of the continuous Newton method for solving nonlinear equations with holomorphic mappings in complex Banach spaces. Our contribution is based on recent progress in the geometric theory of spirallike functions. We prove convergence theorems and illustrate them by numerical simulations.
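To make the object of study concrete, the following sketch integrates the continuous Newton flow z'(t) = -f(z(t))/f'(z(t)) with an explicit Euler scheme for a simple scalar holomorphic map. The step size, time horizon and the example polynomial are arbitrary illustrative choices and not the Banach-space setting of the paper.

import numpy as np

def continuous_newton(f, df, z0, t_end=20.0, n_steps=2000):
    """Explicit Euler discretization of the Newton flow z' = -f(z)/f'(z)."""
    z = complex(z0)
    h = t_end / n_steps
    for _ in range(n_steps):
        z = z - h * f(z) / df(z)
    return z

# Example: starting from 1 + 1j, the flow for f(z) = z**3 - 1 approaches the root z = 1.
root = continuous_newton(lambda z: z**3 - 1, lambda z: 3 * z**2, 1 + 1j)
print(root)

The classical Newton iteration corresponds to step size one; the continuous flow, followed along small steps, traces a trajectory whose convergence behaviour is what the geometric theory of spirallike functions is used to control.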
The textbook is an established and important component of mathematics teaching. Teachers use it to prepare and/or structure their lessons; students use it to learn and to pass, perhaps even out of their own interest; parents use it to inform themselves about what their child is actually supposed to be able to do and how they can help if necessary. Beyond that, the textbook is a distinctive societal product whose purpose is to steer and influence what happens in the classroom. It is thus also an indicator of what should be, and is, taught in mathematics lessons and how. The expository texts, as central components of textbooks, refer in this context particularly to the phases in which new material is introduced. This legitimises the overarching question of what and how (well) expository texts in mathematics textbooks teach, or rather what and how (well) the students they address can learn from them independently, i.e. acquire knowledge.
Given the complex and multifaceted significance of textbook expository texts, it is surprising that research in mathematics education has so far shown little interest in them: there is neither a theoretical conception of the quantity 'teaching potential of a school-mathematical expository text' nor an analytical procedure for determining what can be understood and learned from an expository text in a mathematics textbook. The present work attempts to address these deficits both theoretically and methodologically as well as empirically. The 'teaching potential of a mathematics textbook expository text' is conceived, on the basis of schema theory from cognitive psychology and drawing on approaches from text linguistics, as a text-immanent and analytically accessible quantity. Subsequently, the teaching potential of five expository texts from selected current textbooks for grades 6 and 7 on the content areas 'fractions' and 'linear functions' is analysed. It turns out that the examined expository texts from German textbooks are very difficult for students to understand, i.e. it is hard to make sense of some partial texts within the overall text. The expository texts are particularly hard to read meaningfully when a student tries to understand the facts presented, i.e. to obtain answers to the questions of why a mathematical fact is this way and not otherwise, what a new fact or concept is needed for, how the new material relates to what is already known, and so on. By contrast, the mathematics textbook expository texts appear considerably more accessible and meaningful under the assumption that their central message consists in communicating which types of tasks occur in the respective teaching unit and how to work through them. Accordingly, from these expository texts students can essentially learn how to handle mathematical signs that signify hardly anything to them. The results of the analyses presented here gain in scope and significance in a sociological context. Among other things, they support the thesis that the analysed expository texts are not 'unfortunate' isolated cases, but that 'task orientation in mathematical garb' is a characteristic of typical (German) mathematics textbook expository texts and, more fundamentally, a feature of typical school-mathematical communication.
The present lecture notes aim to provide an introduction to the ergodic behaviour of Markov processes and address graduate students, post-graduate students and interested readers.
Different tools and methods for the study of upper bounds on uniform and weak ergodic rates of Markov Processes are introduced. These techniques are then applied to study limit theorems for functionals of Markov processes.
This lecture course originates in two mini courses held at the University of Potsdam, the Technical University of Berlin and Humboldt University in spring 2013, and at Ritsumeikan University in summer 2013.
Alexei Kulik, Doctor of Sciences, is a Leading researcher at the Institute of Mathematics of Ukrainian National Academy of Sciences.
In this work we study reciprocal classes of Markov walks on graphs. Given a continuous-time reference Markov chain on a graph, its reciprocal class is the set of all probability measures which can be represented as a mixture of the bridges of the reference walk. We characterize reciprocal classes with two different approaches. With the first approach, we characterize a reciprocal class as the set of solutions to duality formulae on path space, where the differential operators have the interpretation of the addition of infinitesimal random loops to the paths of the canonical process. With the second approach we look at short-time asymptotics of bridges. Both approaches allow an explicit computation of reciprocal characteristics, which are divided into two families, the loop characteristics and the arc characteristics. These are the specific functionals of the generator of the reference chain which determine its reciprocal class. We look at specific examples such as Cayley graphs, the hypercube and planar graphs. Finally, we establish the first concentration of measure results for the bridges of a continuous-time Markov chain, based on the reciprocal characteristics.
We describe a natural construction of deformation quantisation on a compact symplectic manifold with boundary. On the algebra of quantum observables a trace functional is defined which, as usual, annihilates the commutators. This gives rise to an index, defined as the trace of the unity element. We formulate the index theorem as a conjecture and examine it for the classical harmonic oscillator.
The in-memory revolution
(2015)
This book describes the next generation of business applications enabled by SAP's in-memory database, SAP HANA. In particular, the authors show the substantial changes introduced in S/4HANA by the switch to SAP HANA. Using numerous examples and use cases from the authors' wealth of real-world experience, it illustrates the quantum leap in performance made possible by the new technology. The book is written by two of the most prominent actors in the area of business application systems: Hasso Plattner, co-founder of SAP and founder of the Hasso Plattner Institute at the University of Potsdam, and Bernd Leukert, member of the Executive Board and the Global Managing Board of SAP. This clearly structured, highly illustrated book takes an exciting new technology and presents the practicality and success of first-mover applications.