Year of publication: 2016 (71)
Document Type: Article (48), Preprint (11), Doctoral Thesis (10), Monograph/Edited Volume (1), Master's Thesis (1)
Language: English (71)
Is part of the Bibliography: yes (71)
Institute: Institut für Mathematik (71)
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2016)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width chronologies as observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of 2 variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic 2-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill, as compared to the utilization of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
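The principle of limiting factors at the core of VSL can be caricatured in a few lines. The ramp thresholds, input values, and the product t-norm below are illustrative stand-ins, not VSL-Lite's calibrated parameters or the paper's exact growth rate functions:

```python
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear response: 0 below lo, 1 above hi (non-smooth)."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def growth_min(gT, gM):
    """Principle of limiting factors as a minimum: the scarcer resource wins."""
    return np.minimum(gT, gM)

def growth_product(gT, gM):
    """A smoother fuzzy-logic t-norm, standing in for the alternative
    growth rate functions discussed in the abstract."""
    return gT * gM

# hypothetical temperature/moisture inputs and response thresholds
T = np.array([2.0, 8.0, 15.0])
M = np.array([0.1, 0.3, 0.5])
gT = ramp(T, 0.0, 10.0)
gM = ramp(M, 0.0, 0.4)
ring_width = growth_min(gT, gM).mean()   # time averaging into one TRW value
```

The kink where the two responses cross is exactly the non-smoothness that, per the abstract, degrades the ensemble Kalman filter update; the product rule removes it.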
We elaborate a boundary Fourier method for studying an analogue of the Hilbert problem for analytic functions within the framework of generalised Cauchy-Riemann equations. The boundary value problem need not satisfy the Shapiro-Lopatinskij condition and so it fails to be Fredholm in Sobolev spaces. We show a solvability condition of the Hilbert problem, which looks like those for ill-posed problems, and construct an explicit formula for approximate solutions.
We construct equivariant KK-theory with coefficients in R and R/Z as suitable inductive limits over II_1-factors. We show that the Kasparov product, together with its usual functorial properties, extends to KK-theory with real coefficients. Let Gamma be a group. We define a Gamma-algebra A to be K-theoretically free and proper (KFP) if the group trace tr of Gamma acts as the unit element in KK_R^Gamma(A, A). We show that free and proper Gamma-algebras (in the sense of Kasparov) have the (KFP) property. Moreover, if Gamma is torsion free and satisfies the KK^Gamma-form of the Baum-Connes conjecture, then every Gamma-algebra satisfies (KFP). If alpha : Gamma -> U_n is a unitary representation and A satisfies property (KFP), we construct in a canonical way a rho class rho_A(alpha) in KK_{R/Z}^{1,Gamma}(A, A). This construction generalizes the Atiyah-Patodi-Singer K-theory class with R/Z-coefficients associated to alpha. (C) 2015 Elsevier Inc. All rights reserved.
We construct new concrete examples of relative differential characters, which we call Cheeger-Chern-Simons characters. They combine the well-known Cheeger-Simons characters with Chern-Simons forms. In the same way as Cheeger-Simons characters generalize Chern-Simons invariants of oriented closed manifolds, Cheeger-Chern-Simons characters generalize Chern-Simons invariants of oriented manifolds with boundary. We study the differential cohomology of compact Lie groups G and their classifying spaces BG. We show that the even degree differential cohomology of BG canonically splits into Cheeger-Simons characters and topologically trivial characters. We discuss the transgression in principal G-bundles and in the universal bundle. We introduce two methods to lift the universal transgression to a differential cohomology valued map. They generalize the Dijkgraaf-Witten correspondence between 3-dimensional Chern-Simons theories and Wess-Zumino-Witten terms to fully extended higher-order Chern-Simons theories. Using these lifts, we also prove two versions of a differential Hopf theorem. Using Cheeger-Chern-Simons characters and transgression, we introduce the notion of differential trivializations of universal characteristic classes. It generalizes well-established notions of differential String classes to arbitrary degree. Specializing to the class 1/2 p_1, we recover isomorphism classes of geometric string structures on Spin(n)-bundles with connection and the corresponding spin structures on the free loop space. The Cheeger-Chern-Simons character associated with the class 1/2 p_1, together with its transgressions to loop space and higher mapping spaces, defines a Chern-Simons theory, extended down to points. Differential String classes provide trivializations of this extended Chern-Simons theory.
This setting immediately generalizes to arbitrary degree: for any universal characteristic class of principal G-bundles, we have an associated Cheeger-Chern-Simons character and extended Chern-Simons theory. Differential trivialization classes yield trivializations of this extended Chern-Simons theory.
We introduce extensions of stability selection, a method to stabilise variable selection methods introduced by Meinshausen and Bühlmann (J R Stat Soc 72:417-473, 2010). We propose to apply a base selection method repeatedly to random subsamples of observations and subsets of covariates under scrutiny, and to select covariates based on their selection frequency. We analyse the effects and benefits of these extensions. Our analysis generalizes the theoretical results of Meinshausen and Bühlmann (J R Stat Soc 72:417-473, 2010) from the case of half-samples to subsamples of arbitrary size. We study, in a theoretical manner, the effect of taking random covariate subsets using a simplified score model. Finally, we validate these extensions on numerical experiments on both synthetic and real datasets, and compare the obtained results in detail to the original stability selection method.
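The subsampling scheme described above can be sketched with a deliberately simple base selector (top-k covariates by absolute correlation with the response); the subsample fractions, k, and the frequency threshold are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def stability_selection(X, y, n_rounds=200, sample_frac=0.5,
                        covariate_frac=0.8, k=5):
    """Run a base selector on random subsamples of observations AND
    random subsets of covariates; return per-covariate selection
    frequencies (times selected / times the covariate was in play)."""
    n, p = X.shape
    counts = np.zeros(p)
    tried = np.zeros(p)
    for _ in range(n_rounds):
        rows = rng.choice(n, size=int(sample_frac * n), replace=False)
        cols = rng.choice(p, size=int(covariate_frac * p), replace=False)
        Xs, ys = X[np.ix_(rows, cols)], y[rows]
        # base selector: top-k covariates by absolute covariance with y
        score = np.abs((Xs - Xs.mean(0)).T @ (ys - ys.mean()))
        picked = cols[np.argsort(score)[-k:]]
        tried[cols] += 1
        counts[picked] += 1
    return counts / np.maximum(tried, 1)

# toy data: only the first three of twenty covariates carry signal
X = rng.standard_normal((200, 20))
y = X[:, :3] @ np.array([3.0, 2.0, 1.5]) + 0.5 * rng.standard_normal(200)
freq = stability_selection(X, y)
selected = np.flatnonzero(freq > 0.6)
```

In practice the base selector would be something like the lasso; the frequency threshold then controls the trade-off between false positives and missed covariates.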
Being motivated by open questions in gauge field theories, we consider non-standard de Rham cohomology groups for timelike compact and spacelike compact support systems. These cohomology groups are shown to be isomorphic respectively to the usual de Rham cohomology of a spacelike Cauchy surface and its counterpart with compact support. Furthermore, an analog of the usual Poincare duality for de Rham cohomology is shown to hold for the case with non-standard supports as well. We apply these results to find optimal spaces of linear observables for analogs of arbitrary degree k of both the vector potential and the Faraday tensor. The term optimal is to be understood in the following sense: the spaces of linear observables we consider distinguish between different configurations; in addition to that, there are no redundant observables. This last point in particular heavily relies on the analog of Poincare duality for the new cohomology groups. Published by AIP Publishing.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for the understanding of the observed system’s intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, the generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First of all, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile river at Aswan and the weather station Tuscaloosa, Alabama, comprising documented changes. The method’s performance confirms the approach as a powerful diagnostic tool to decipher multiple changes underlying direct climate observations.
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region of the Plio-Pleistocene period. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker Circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
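The thesis's local inference step — locating a single transition in central tendency — can be caricatured by a pseudo-posterior over change locations built from a profile-likelihood score; the shift size, noise level, and change location below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic series with a mean shift at t = 120
x = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])
n = len(x)

def split_score(x, tau):
    """Log-likelihood (up to constants) of a mean change at tau,
    with the segment means profiled out."""
    a, b = x[:tau], x[tau:]
    return -0.5 * (np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2))

# pseudo-posterior over candidate locations (edges excluded)
taus = np.arange(10, n - 10)
scores = np.array([split_score(x, t) for t in taus])
post = np.exp(scores - scores.max())
post /= post.sum()
tau_hat = taus[np.argmax(post)]
```

The full approach additionally infers changes in dispersion and composes many such local inferences through kernel weights; this sketch only conveys the single-change building block.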
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are "mutually irreducible," a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to "mixture proportion estimation," which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach.
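The "mixture proportion" mentioned above — the largest kappa such that F = kappa*G + (1-kappa)*H for some distribution H — can be estimated crudely as the infimum of the bin-wise density ratio dF/dG. The Gaussian components, sample sizes, and bin range below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# F = 0.3 * G + 0.7 * H with G = N(0,1) and H = N(4,1)
kappa_true = 0.3
g = rng.normal(0.0, 1.0, 100_000)
f = np.where(rng.random(100_000) < kappa_true,
             rng.normal(0.0, 1.0, 100_000),
             rng.normal(4.0, 1.0, 100_000))

# bin-wise ratio of empirical masses, restricted to a region where the
# contaminating component H is negligible, so that dF/dG -> kappa there
bins = np.linspace(-2.0, 0.0, 5)
f_counts, _ = np.histogram(f, bins)
g_counts, _ = np.histogram(g, bins)
kappa_hat = np.min(f_counts / g_counts)
```

Knowing such a region a priori is of course cheating; the point of "mutual irreducibility" in the paper is precisely to characterize when the infimum is identifiable without it.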
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient (CG) algorithm, where regularization against over-fitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additional noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
Acyclicity constraints are prevalent in knowledge representation and applications where acyclic data structures such as DAGs and trees play a role. Recently, such constraints have been considered in the satisfiability modulo theories (SMT) framework, and in this paper we carry out an analogous extension to the answer set programming (ASP) paradigm. The resulting formalism, ASP modulo acyclicity, offers a rich set of primitives to express constraints related to recursive structures. In the technical results of the paper, we relate the new generalization with standard ASP by showing (i) how acyclicity extensions translate into normal rules, (ii) how weight constraint programs can be instrumented by acyclicity extensions to capture stability in analogy to unfounded set checking, and (iii) how the gap between supported and stable models is effectively closed in the presence of such an extension. Moreover, we present an efficient implementation of acyclicity constraints by incorporating a respective propagator into the state-of-the-art ASP solver CLASP. The implementation provides a unique combination of traditional unfounded set checking with acyclicity propagation. In the experimental part, we evaluate the interplay of these orthogonal checks by equipping logic programs with supplementary acyclicity constraints. The performance results show that native support for acyclicity constraints is a worthwhile addition, furnishing a complementary modeling construct in ASP itself as well as effective means for translation-based ASP solving.
We discuss the chiral anomaly for a Weyl field in a curved background and show that a novel index theorem for the Lorentzian Dirac operator can be applied to describe the gravitational chiral anomaly. A formula for the total charge generated by the gravitational and gauge field background is derived directly in Lorentzian signature and in a mathematically rigorous manner. It contains a term identical to the integrand in the Atiyah-Singer index theorem and another term involving the eta-invariant of the Cauchy hypersurfaces.
constraints
(2016)
Prior information in ill-posed inverse problems is of critical importance because it conditions the posterior solution and its associated variability. The problem of determining the flow evolving at the Earth's core-mantle boundary through magnetic field models derived from satellite or observatory data is no exception to the rule. This study aims to estimate what information can be extracted on the velocity field at the core-mantle boundary when the frozen flux equation is inverted under very weakly informative, but realistic, prior constraints. Instead of imposing a converging spectrum to the flow, we simply assume that its poloidal and toroidal energy spectra are characterized by power laws. The parameters of the spectra, namely their magnitudes and slopes, are unknown. The connection between the velocity field, its spectra parameters, and the magnetic field model is established through the Bayesian formulation of the problem. Working in two steps, we determined the time-averaged spectra of the flow within the 2001–2009.5 period, as well as the flow itself and its associated uncertainties in 2005.0. According to the spectra we obtained, we can conclude that the large-scale approximation of the velocity field is not an appropriate assumption within the time window we considered. For the flow itself, we show that although it is dominated by its equatorial symmetric component, it is very unlikely to be perfectly symmetric. We also demonstrate that its geostrophic state is questioned in different locations of the outer core.
Generalizing a linear expression over a vector space, we call a term of an arbitrary type tau linear if each of its variables occurs only once. Instead of the usual superposition of terms and of the total many-sorted clone of all terms, in the case of linear terms we define the partial many-sorted superposition operation and the partial many-sorted clone that satisfies the superassociative law as weak identity. The extensions of linear hypersubstitutions are weak endomorphisms of this partial clone. For a variety V of one-sorted total algebras of type tau, we define the partial many-sorted linear clone of V as the partial quotient algebra of the partial many-sorted clone of all linear terms by the set of all linear identities of V. We prove then that weak identities of this clone correspond to linear hyperidentities of V.
Using an algorithm based on a retrospective rejection sampling scheme, we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
Let (M, g, k) be an initial data set for the Einstein equations of general relativity. We show that a canonical solution of the Jang equation exists in the complement of the union of all weakly future outer trapped regions in the initial data set with respect to a given end, provided that this complement contains no weakly past outer trapped regions. The graph of this solution relates the area of the horizon to the global geometry of the initial data set in a non-trivial way. We prove the existence of a Scherk-type solution of the Jang equation outside the union of all weakly future or past outer trapped regions in the initial data set. This result is a natural exterior analogue for the Jang equation of the classical Jenkins-Serrin theory. We extend and complement existence theorems [19, 20, 40, 29, 18, 31, 11] for Scherk-type constant mean curvature graphs over polygonal domains in (M, g), where (M, g) is a complete Riemannian surface. We can dispense with the a priori assumptions that a subsolution exists and that (M, g) has particular symmetries. Also, our method generalizes to higher dimensions.
The aim of this paper is to bring together two areas which are of great importance for the study of overdetermined boundary value problems. The first is homological algebra, which is the main tool in constructing the formal theory of overdetermined problems. The second is the global calculus of pseudodifferential operators, which allows one to develop explicit analysis.
We study operators on singular manifolds, here of conical or edge type, and develop a new general approach to representing asymptotics of solutions to elliptic equations close to the singularities. We introduce asymptotic parametrices, using tools from cone and edge pseudo-differential algebras. Our structures are motivated by models of many-particle physics with singular Coulomb potentials that contribute higher order singularities in Euclidean space, determined by the number of particles.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate.
The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due to the remarkable evolutionary adaptation that the virus exhibits by the accumulation of drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly when treatment failure has previously occurred with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in assessing treatment failure with a certain drug combination and in exploring potential salvage treatment regimens.
Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs. However, such models typically use simplified and coarse-grained mutation schemes, which curbs the extent of their application to drug-specific clinical mutation data for the assessment of potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate drug effect mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and also in understanding the impact of drug efficacy on mutation dynamics.
In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling approach enables us to study the dynamics of different mutant genotypes and assess mutational abundance at virological failure. As an application of our model, we estimate in vivo fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action, including molecules that target host factors.
Additionally, we address another important aspect in the clinical management of HIV disease, namely drug pharmacokinetics. It is clear that time-dependent changes in in vivo drug concentration could have an impact on the antiviral effect and also influence decisions on dosing intervals. We present a framework that provides an integrated understanding of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression.
Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting.
This paper extends the multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, multilevel Monte Carlo is applied to a certain variant of the particle filter, the ensemble transform particle filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles.
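The multilevel idea underneath — couple coarse and fine samples so that the level corrections have small variance — is the standard MLMC telescoping estimator. The sketch below applies it to a plain Euler scheme for geometric Brownian motion rather than to the ETPF itself; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_pair(level, n, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Fine Euler path with 2**level steps and, for level > 0, a coarse
    path at half the resolution driven by the SAME Brownian increments;
    this coupling is what keeps Var[fine - coarse] small."""
    nf = 2 ** level
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n, nf))
    xf = np.full(n, x0)
    for i in range(nf):
        xf = xf * (1.0 + mu * dt + sigma * dW[:, i])
    if level == 0:
        return xf, np.zeros(n)
    dWc = dW[:, 0::2] + dW[:, 1::2]        # coarse increments: summed pairs
    xc = np.full(n, x0)
    for i in range(nf // 2):
        xc = xc * (1.0 + mu * 2 * dt + sigma * dWc[:, i])
    return xf, xc

# telescoping estimator of E[X_T]; fewer samples on the expensive fine levels
L = 5
estimate = 0.0
for level in range(L + 1):
    n = 20000 // 2 ** level + 100
    xf, xc = euler_pair(level, n)
    estimate += np.mean(xf) if level == 0 else np.mean(xf - xc)
```

In the paper, the analogous coupling between coarse and fine ensembles is re-established after each resampling step via optimal transport, which is the technically hard part that this toy example side-steps.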
In this paper we analyze supergeometric locally covariant quantum field theories. We develop suitable categories SLoc of super-Cartan supermanifolds, which generalize Lorentz manifolds in ordinary quantum field theory, and show that, starting from a few representation theoretic and geometric data, one can construct a functor U : SLoc -> S*Alg to the category of super-*-algebras, which can be interpreted as a non-interacting super-quantum field theory. This construction turns out to disregard supersymmetry transformations as the morphism sets in the above categories are too small. We then solve this problem by using techniques from enriched category theory, which allows us to replace the morphism sets by suitable morphism supersets that contain supersymmetry transformations as their higher superpoints. We construct superquantum field theories in terms of enriched functors eU : eSLoc -> eS*Alg between the enriched categories and show that supersymmetry transformations are appropriately described within the enriched framework. As examples we analyze the superparticle in 1|1 dimensions and the free Wess-Zumino model in 3|2 dimensions.
Let (M, g) be a closed Riemannian manifold of dimension n >= 3 and let f in C^infinity(M) be such that the operator P_f := Delta_g + f is positive. If g is flat near some point p and f vanishes around p, we can define the mass of P_f as the constant term in the expansion of the Green function of P_f at p. In this paper, we establish many results on the mass of such operators. In particular, if f := (n-2)/(4(n-1)) s_g, i.e. if P_f is the Yamabe operator, we show the following result: assume that there exists a closed simply connected non-spin manifold M such that the mass is non-negative for every metric g as above on M; then the mass is non-negative for every such metric on every closed manifold of the same dimension as M. (C) 2016 Elsevier Inc. All rights reserved.
We introduce a technique for the modeling and separation of geomagnetic field components that is based on an analysis of their correlation structures alone. The inversion is based on a Bayesian formulation, which allows the computation of uncertainties. The technique allows the incorporation of complex measurement geometries like observatory data in a simple way. We show how our technique is linked to other well-known inversion techniques. A case study based on observational data is given.
We present a simple observation showing that the heat kernel on a locally finite graph behaves for short times t roughly like t^d, where d is the combinatorial distance. This is very different from the classical Varadhan-type behavior on manifolds. Moreover, this also gives that short-time behavior and global behavior of the heat kernel are governed by two different metrics whenever the degree of the graph is not uniformly bounded.
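Schematically (a paraphrase of the contrast, not the paper's precise formulation), the two short-time regimes read:

```latex
\log p_t(x,y) \sim d(x,y)\,\log t \quad (t \searrow 0)
\qquad\text{on graphs, vs.}\qquad
\log p_t(x,y) \sim -\frac{\varrho(x,y)^2}{4t} \quad (t \searrow 0)
\quad\text{on manifolds,}
```

where d(x, y) is the combinatorial graph distance and ϱ(x, y) the Riemannian distance appearing in Varadhan's formula.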
We prove Cheeger inequalities for p-Laplacians on finite and infinite weighted graphs. Unlike in previous works, we do not impose boundedness of the vertex degree, nor do we restrict ourselves to the normalized Laplacian and, more generally, we do not impose any boundedness assumption on the geometry. This is achieved by a novel definition of the measure of the boundary which uses the idea of intrinsic metrics. For the non-normalized case, our bounds on the spectral gap of p-Laplacians are already significantly better for finite graphs and for infinite graphs they yield non-trivial bounds even in the case of unbounded vertex degree. We, furthermore, give upper bounds by the Cheeger constant and by the exponential volume growth of distance balls. (C) 2016 Elsevier Ltd. All rights reserved.
We study graphs whose vertex degree tends to infinity and which are, therefore, called rapidly branching. We prove spectral estimates, discreteness of spectrum, first order eigenvalue and Weyl asymptotics solely in terms of the vertex degree growth. The underlying techniques are estimates on the isoperimetric constant. Furthermore, we give lower volume growth bounds and we provide a new criterion for stochastic incompleteness. (C) 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
The three-space theory of problem solving predicts that the quality of a learner's model and the goal specificity of a task interact on knowledge acquisition. In Experiment 1 participants used a computer simulation of a lever system to learn about torques. They either had to test hypotheses (nonspecific goal), or to produce given values for variables (specific goal). In the good- but not in the poor-model condition they saw torque depicted as an area. Results revealed the predicted interaction. A nonspecific goal only resulted in better learning when a good model of torques was provided. In Experiment 2 participants learned to manipulate the inputs of a system to control its outputs. A nonspecific goal to explore the system helped performance when compared to a specific goal to reach certain values when participants were given a good model, but not when given a poor model that suggested the wrong hypothesis space. Our findings support the three-space theory. They emphasize the importance of understanding for problem solving and stress the need to study underlying processes.
Based on theories of scientific discovery learning (SDL) and conceptual change, this study explores students' preconceptions in the domain of torques in physics and the development of these conceptions while learning with a computer-based SDL task. As a framework we used a three-space theory of SDL and focused on model space, which is supposed to contain the current conceptualization/model of the learning domain, and on its change through hypothesis testing and experimenting. Three questions were addressed: (1) What are students' preconceptions of torques before learning about this domain? To do this a multiple-choice test for assessing students' models of torques was developed and given to secondary school students (N = 47) who learned about torques using computer simulations. (2) How do students' models of torques develop during SDL? Working with simulations led to replacement of some misconceptions with physically correct conceptions. (3) Are there differential patterns of model development and if so, how do they relate to students’ use of the simulations? By analyzing individual differences in model development, we found that an intensive use of the simulations was associated with the acquisition of correct conceptions. Thus, the three-space theory provided a useful framework for understanding conceptual change in SDL.
We analyze a general class of difference operators $H_\varepsilon = T_\varepsilon + V_\varepsilon$ on $\ell^2((\varepsilon\mathbb{Z})^d)$, where $V_\varepsilon$ is a multi-well potential and $\varepsilon$ is a small parameter. We construct approximate eigenfunctions in neighbourhoods of the different wells and give weighted $\ell^2$-estimates for the difference of these and the exact eigenfunctions of the associated Dirichlet operators.
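The near-degeneracy that motivates the multi-well analysis above can be seen in a small numerical toy. The sketch below (an illustration only, not the paper's construction; the grid, potential, and parameter values are made up) builds a discrete operator $H = T + V$ with a symmetric double-well potential, where $T$ is the standard second-difference operator, so that $H$ acts like $-\varepsilon^2 \, d^2/dx^2 + V(x)$. Its two lowest eigenvalues form a nearly degenerate pair, one approximate eigenfunction per well:

```python
import numpy as np

# Toy double-well difference operator H = T + V on a finite piece of the
# lattice eps*Z (illustration only, not the paper's construction).
# T is the second-difference operator, so H ~ -eps^2 d^2/dx^2 + V(x).
x = np.linspace(-1.0, 1.0, 101)
eps = x[1] - x[0]                      # lattice spacing, here 0.02
n = len(x)

V = (x**2 - 0.25) ** 2                 # double well with minima at x = +/- 0.5

T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # ~ -eps^2 * Laplacian
H = T + np.diag(V)

evals = np.linalg.eigvalsh(H)
splitting = evals[1] - evals[0]        # exponentially small tunneling splitting
gap = evals[2] - evals[1]              # gap to the next level is much larger
print(splitting, gap)
```

For small $\varepsilon$ the splitting of the lowest pair is exponentially small compared with the gap to the next eigenvalue, reflecting that each well carries its own approximate eigenfunction.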
Socio-political studies in mathematics education often touch complex fields of interaction between education, mathematics and the political. In this paper I present a Foucault-based framework for socio-political studies in mathematics education which may guide research in that area. In order to show the potential of such a framework, I discuss the potential and limits of Marxian ideology critique, present existing Foucault-based research on socio-political aspects of mathematics education, develop my framework and show its use in an outline of a study on socio-political aspects of calculation in the mathematics classroom.
Using Causal Effect Networks to Analyze Different Arctic Drivers of Midlatitude Winter Circulation
(2016)
In recent years, the Northern Hemisphere midlatitudes have suffered from severe winters like the extreme 2012/13 winter in the eastern United States. These cold spells were linked to a meandering upper-tropospheric jet stream pattern and a negative Arctic Oscillation (AO) index. However, the nature of the drivers behind these circulation patterns remains controversial. Various studies have proposed different mechanisms related to changes in the Arctic, most of them related to a reduction in sea ice concentrations or increasing Eurasian snow cover. Here, a novel type of time series analysis, called causal effect networks (CEN), based on graphical models is introduced to assess causal relationships and their time delays between different processes. The effect of different Arctic actors on winter circulation on weekly to monthly time scales is studied, and robust network patterns are found. Barents and Kara sea ice concentrations are detected to be important external drivers of the midlatitude circulation, influencing the winter AO via tropospheric mechanisms and through processes involving the stratosphere. Eurasian snow cover is also detected to have a causal effect on sea level pressure in Asia, but its exact role in the AO remains unclear. The CEN approach presented in this study overcomes some difficulties in interpreting correlation analyses, complements model experiments for testing hypotheses involving teleconnections, and can be used to assess their validity. The findings confirm that sea ice concentrations in autumn in the Barents and Kara Seas are an important driver of winter circulation in the midlatitudes.
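The conditioning idea underlying causal effect networks, namely that naive lagged correlations are confounded by autocorrelation while conditioning on other lagged variables isolates the true driver and its delay, can be illustrated in a minimal sketch. This is a generic toy (not the authors' CEN implementation; the synthetic series and coefficients below are made up), using a joint regression as a stand-in for conditional-independence testing:

```python
import numpy as np

# Toy: x drives y with a 2-step delay; both series are autocorrelated.
# Naive lagged correlations flag *every* lag, while a joint regression
# (conditioning on the other lagged variables) isolates the true link.
rng = np.random.default_rng(0)
n = 20000
x = np.zeros(n)
y = np.zeros(n)
for t in range(2, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()                    # autocorrelated driver
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t - 2] + rng.normal()   # x drives y at lag 2

# Naive analysis: correlation of y_t with x_{t-tau} is sizable at all lags.
naive = [np.corrcoef(x[3 - tau : n - tau], y[3:])[0, 1] for tau in (1, 2, 3)]

# Conditioning: regress y_t jointly on y_{t-1} and three lags of x;
# only the true link (lag 2) keeps a large coefficient.
Z = np.column_stack([y[2:-1], x[2:-1], x[1:-2], x[:-3]])
coef, *_ = np.linalg.lstsq(Z, y[3:], rcond=None)
print(naive)
print(coef)  # roughly [0.8, 0.0, 0.5, 0.0]
```

The regression here plays the role that iterated conditional-independence tests play in the graphical-model approach: spurious lags inherited from autocorrelation drop out once the candidate parents are conditioned on.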
Using a global symbol calculus for pseudodifferential operators on tori, we build a canonical trace on classical pseudodifferential operators on noncommutative tori in terms of a canonical discrete sum on the underlying toroidal symbols. We characterise the canonical trace on operators on the noncommutative torus as well as its underlying canonical discrete sum on symbols of fixed (resp. any) noninteger order. On the grounds of this uniqueness result, we prove that in the commutative setup, this canonical trace on the noncommutative torus reduces to Kontsevich and Vishik's canonical trace which is thereby identified with a discrete sum. A similar characterisation for the noncommutative residue on noncommutative tori as the unique trace which vanishes on trace-class operators generalises Fathizadeh and Wong's characterisation in so far as it includes the case of operators of fixed integer order. By means of the canonical trace, we derive defect formulae for regularized traces. The conformal invariance of the $ \zeta $-function at zero of the Laplacian on the noncommutative torus is then a straightforward consequence.
It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined. One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral then is defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero.
In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates.
Finally, we apply these results to obtain short time asymptotic expansions of the heat kernel also in degenerate cases (i.e. at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which gives relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics.
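The time-slicing idea described in this abstract has a well-known finite-dimensional analogue, the Lie–Trotter product formula $\exp(A+B) = \lim_{n\to\infty}(\exp(A/n)\exp(B/n))^n$, which likewise builds the full propagator by composing many short-time pieces. A minimal numerical sketch (an analogy only, not the thesis's construction; the matrices are made up):

```python
import numpy as np

# Lie-Trotter splitting as a finite-dimensional analogue of time-slicing:
# the full propagator exp(A + B) is approximated by composing n short-time
# factors exp(A/n) exp(B/n), and the error vanishes as n grows.

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # non-commuting pair: AB != BA
B = np.array([[0.0, 0.0], [1.0, 0.0]])

exact = expm(A + B)

def trotter(n):
    step = expm(A / n) @ expm(B / n)     # one "time slice"
    return np.linalg.matrix_power(step, n)

err = [np.linalg.norm(trotter(n) - exact) for n in (4, 16, 64)]
print(err)  # errors shrink as the slicing gets finer
```

The uniformity of such error estimates in the time parameter is exactly the kind of control the thesis establishes for the genuinely infinite-dimensional case.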
Let $A$ be a nonlinear differential operator on an open set $X \subset \mathbb{R}^n$ and $S$ a closed subset of $X$. Given a class $F$ of functions in $X$, the set $S$ is said to be removable for $F$ relative to $A$ if any weak solution of $A(u) = 0$ in $X \setminus S$ of class $F$ satisfies this equation weakly in all of $X$. For the most extensively studied classes $F$, we show conditions on $S$ which guarantee that $S$ is removable for $F$ relative to $A$.
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
A manifold $M$ with smooth edge $Y$ is locally near $Y$ modelled on $X^\Delta \times \Omega$ for a cone $X^\Delta := (\overline{\mathbb{R}}_+ \times X)/(\{0\} \times X)$, where $X$ is a smooth manifold and $\Omega \subset \mathbb{R}^q$ an open set corresponding to a chart on $Y$. Compared with pseudo-differential algebras based on other quantizations of edge-degenerate symbols, we extend the approach with Mellin representations on the $r$ half-axis up to $r = \infty$, the conical exit of $X^\wedge = \mathbb{R}_+ \times X \ni (r, x)$ at infinity. The alternative description of the edge calculus is useful for pseudo-differential structures on manifolds with higher singularities.
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion by flipping its excursions from the origin with a given probability. Such a process behaves like the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, behaves as a diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Clearly, a multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semipermeability) or by its infinitesimal generator as a Markov process.
In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its properties in the complex plane. Thanks to this representation, we write the transition densities of the two-skew Brownian motion with constant drift explicitly as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new and useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a target density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental densities is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one finally samples directly from the law without any approximation error beyond the machine's floating-point precision.
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
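The classical rejection sampling method recalled above can be sketched in a few lines. This is a generic illustration with a made-up target density (the thesis's generalized algorithm and the skew-Brownian densities themselves are not reproduced here): the target is the Beta(2,2) density $f(x) = 6x(1-x)$, the instrumental density is uniform on $[0,1]$, and the ratio is bounded by $M = 1.5$:

```python
import numpy as np

# Classical rejection sampling (illustration only, with a made-up target):
# target f(x) = 6x(1-x) on [0,1], instrumental g = uniform, f/g <= M = 1.5.
rng = np.random.default_rng(1)

def rejection_sample(size, M=1.5):
    samples = []
    while len(samples) < size:
        x = rng.uniform()                 # draw from the instrumental density g
        u = rng.uniform()                 # accept x with probability f(x)/(M g(x))
        if u <= 6 * x * (1 - x) / M:
            samples.append(x)
    return np.array(samples)

s = rejection_sample(100_000)
print(s.mean(), s.var())  # Beta(2,2): mean 1/2, variance 1/20
```

The accepted draws are exact samples from the target law; the only requirement is the bounded-ratio condition, which is precisely what the uniform bound mentioned in the abstract provides in the skew-Brownian setting.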
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the cases of a continuous drift and of a drift with one finite jump have been treated. The theoretical method we give allows one to deal with any finite number of discontinuities. We then focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
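The semipermeable-barrier behaviour described in this abstract can be illustrated by a simple random-walk approximation in the spirit of the Itô–McKean construction (an illustration only, not the exact simulation algorithm developed in the thesis; all parameters are made up): away from 0 the walk is simple and symmetric, but upon hitting 0 it steps up with probability $p$, so the origin acts as a semipermeable barrier:

```python
import numpy as np

# Random-walk sketch of a skew Brownian motion (illustration only): the
# walk is symmetric away from 0, but at 0 it steps up with probability p.
rng = np.random.default_rng(2)

def skew_walk(p, n_steps, n_paths):
    """Endpoints of n_paths skew random walks of n_steps steps, started at 0."""
    ends = np.empty(n_paths)
    for i in range(n_paths):
        u = rng.uniform(size=n_steps)
        s = 0
        for t in range(n_steps):
            thresh = p if s == 0 else 0.5   # biased step only at the barrier
            s += 1 if u[t] < thresh else -1
        ends[i] = s
    return ends

ends = skew_walk(p=0.8, n_steps=400, n_paths=2000)
print((ends > 0).mean())  # with p > 1/2, positive endpoints dominate
```

With $p = 1$ the barrier is fully reflecting and the walk never goes below 0; with $p = 1/2$ the ordinary symmetric walk is recovered.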
The present paper is intended to provide the basis for the study of weakly differentiable functions on rectifiable varifolds with locally bounded first variation. The concept proposed here is defined by means of integration-by-parts identities for certain compositions with smooth functions. In this class, the idea of zero boundary values is realised using the relative perimeter of superlevel sets. Results include a variety of Sobolev–Poincaré-type embeddings, embeddings into spaces of continuous and sometimes Hölder-continuous functions, and pointwise differentiability results both of approximate and integral type, as well as coarea formulae. As a prerequisite for this study, decomposition properties of such varifolds and a relative isoperimetric inequality are established. Both involve a concept of distributional boundary of a set introduced for this purpose. As applications, the finiteness of the geodesic distance associated with varifolds with suitable summability of the mean curvature and a characterisation of curvature varifolds are obtained.
This paper introduces first-order Sobolev spaces on certain rectifiable varifolds. These complete locally convex spaces are contained in the generally non-linear class of generalised weakly differentiable functions and share key functional analytic properties with their Euclidean counterparts. Assuming the varifold to satisfy a uniform lower density bound and a dimensionally critical summability condition on its mean curvature, the following statements hold. Firstly, continuous and compact embeddings of Sobolev spaces into Lebesgue spaces and spaces of continuous functions are available. Secondly, the geodesic distance associated to the varifold is a continuous, not necessarily Hölder continuous Sobolev function with bounded derivative. Thirdly, if the varifold additionally has bounded mean curvature and finite measure, then the present Sobolev spaces are isomorphic to those previously available for finite Radon measures, yielding many new results for those classes as well. Suitable versions of the embedding results obtained for Sobolev functions hold in the larger class of generalised weakly differentiable functions.
When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., they have the property that the composition of any two neighbouring operators has order less than two.