We consider the Cauchy problem for the heat equation in a cylinder C_T = X × (0, T) over a domain X in R^n, with data on a strip lying on the lateral surface. The strip is of the form S × (0, T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S, we derive an explicit formula for solutions of this problem.
In this note, we consider the semigroup O(X) of all order endomorphisms of an infinite chain X and the subset J of O(X) of all transformations α such that |Im(α)| = |X|. For an infinite countable chain X, we give a necessary and sufficient condition on X for O(X) = ⟨J⟩ to hold. We also present a sufficient condition on X for O(X) = ⟨J⟩ to hold for an arbitrary infinite chain X.
We prove that if u is a locally Lipschitz continuous function on an open set χ ⊂ R^{n+1} satisfying the nonlinear heat equation ∂_t u = Δ(|u|^{p-1} u), p > 1, weakly away from the zero set u^{-1}(0) in χ, then u is a weak solution to this equation in all of χ.
In a bounded domain with smooth boundary in R^3 we consider the stationary Maxwell equations for a function u with values in R^3, subject to the nonhomogeneous boundary condition (u, v)_x = u_0, where v is a given vector field and u_0 a function on the boundary. We specify this problem within the framework of Riemann-Hilbert boundary value problems for the Moisil-Teodorescu system. The latter is proved to satisfy the Shapiro-Lopatinskij condition if and only if the vector v is at no point tangent to the boundary. The Riemann-Hilbert problem for the Moisil-Teodorescu system fails to possess an adjoint boundary value problem with respect to the Green formula that satisfies the Shapiro-Lopatinskij condition. We develop the construction of the Green formula to obtain a proper concept of adjoint boundary value problem.
We equip the space of lattice cones with a coproduct which makes it a cograded, coaugmented, connected coalgebra. The exponential generating sum and exponential generating integral on lattice cones can be viewed as linear maps on this space with values in the space of meromorphic germs with linear poles at zero. We investigate the subdivision properties (reminiscent of the inclusion-exclusion principle for the cardinality of finite sets) of such linear maps and show that these properties are compatible with the convolution quotient of maps on the coalgebra. Implementing the algebraic Birkhoff factorization procedure on the linear maps under consideration, we factorize the exponential generating sum as a convolution quotient of two maps, with each of the maps in the factorization satisfying a subdivision property. A direct computation shows that the polar decomposition of the exponential generating sum on a smooth lattice cone yields an Euler-Maclaurin formula. The compatibility with subdivisions of the convolution quotient arising in the algebraic Birkhoff factorization then yields the Euler-Maclaurin formula for any lattice cone. This provides a simple formula for the interpolating factor by means of a projection formula.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
(2017)
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture does deteriorate the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
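The core of such a pseudo-proxy experiment — an ensemble Kalman update pushed through a nonlinear, limiting-factor growth operator — can be sketched as follows. This is a minimal illustration, not the authors' forward model or experimental setup: the ramp parameters, ensemble size, truth values, and error levels are all invented for the example. Note how the hard minimum in the operator damps the update, echoing the skill loss discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)

def tree_growth(T, M):
    # toy growth operator: the scarcer of the two scaled drivers limits growth
    fT = np.clip((T - 10.0) / 10.0, 0.0, 1.0)   # temperature response ramp
    fM = np.clip(M, 0.0, 1.0)                   # soil-moisture response ramp
    return np.minimum(fT, fM)                   # hard-minimum version of the PLF

N = 500
T_ens = 15.0 + 2.0 * rng.standard_normal(N)     # prior temperature ensemble (deg C)
M_ens = 0.5 + 0.2 * rng.standard_normal(N)      # prior relative soil moisture

T_true, M_true = 13.0, 0.8                      # truth is temperature-limited here
obs = tree_growth(T_true, M_true) + 0.02 * rng.standard_normal()
R = 0.02 ** 2                                   # pseudo-proxy error variance

# off-line EnKF update of temperature through the nonlinear operator
y_ens = tree_growth(T_ens, M_ens)
K = np.cov(T_ens, y_ens)[0, 1] / (y_ens.var(ddof=1) + R)
T_post = T_ens + K * (obs - y_ens)

prior_err = abs(T_ens.mean() - T_true)
post_err = abs(T_post.mean() - T_true)
```

The posterior mean moves toward the (colder) truth, but only partially: members whose growth is moisture-limited carry no temperature signal, which weakens the regression-based gain.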
We establish a calculus of boundary value problems (BVPs) on a manifold N with boundary and edge, based on Boutet de Monvel’s theory of BVPs in the case of a smooth boundary and on the edge calculus, where in the present case the model cone has a base which is a compact manifold with boundary. The corresponding calculus with boundary and edge is a unification of both structures and controls different operator-valued symbolic structures, in order to obtain ellipticity and parametrices.
This paper concerns integral varifolds of arbitrary dimension in an open subset of Euclidean space satisfying integrability conditions on their first variation. Firstly, the study of pointwise power decay rates almost everywhere of the quadratic tilt-excess is completed by establishing the precise decay rate for two-dimensional integral varifolds of locally bounded first variation. In order to obtain the exact decay rate, a coercive estimate involving a height-excess quantity measured in Orlicz spaces is established. Moreover, counter-examples to pointwise power decay rates almost everywhere of the super-quadratic tilt-excess are obtained. These examples are optimal in terms of the dimension of the varifold and the exponent of the integrability condition in most cases, for example if the varifold is not two-dimensional. These examples also demonstrate that within the scale of Lebesgue spaces no local higher integrability of the second fundamental form, of an at least two-dimensional curvature varifold, may be deduced from boundedness of its generalised mean curvature vector. Amongst the tools are Cartesian products of curvature varifolds.
We study differential cohomology on categories of globally hyperbolic Lorentzian manifolds. The Lorentzian metric allows us to define a natural transformation whose kernel generalizes Maxwell's equations and fits into a restriction of the fundamental exact sequences of differential cohomology. We consider smooth Pontryagin duals of differential cohomology groups, which are subgroups of the character groups. We prove that these groups fit into smooth duals of the fundamental exact sequences of differential cohomology and equip them with a natural presymplectic structure derived from a generalized Maxwell Lagrangian. The resulting presymplectic Abelian groups are quantized using the CCR-functor, which yields a covariant functor from our categories of globally hyperbolic Lorentzian manifolds to the category of C∗-algebras. We prove that this functor satisfies the causality and time-slice axioms of locally covariant quantum field theory, but that it violates the locality axiom. We show that this violation is precisely due to the fact that our functor has topological subfunctors describing the Pontryagin duals of certain singular cohomology groups. As a byproduct, we develop a Fréchet–Lie group structure on differential cohomology groups.
We analyze an inverse noisy regression model under random design with the aim of estimating the unknown target function based on a given set of data, drawn according to some unknown probability distribution. Our estimators are all constructed by kernel methods, which depend on a reproducing kernel Hilbert space structure, using spectral regularization methods.
A first main result establishes upper and lower bounds for the rate of convergence under a given source condition assumption, restricting the class of admissible distributions. But since kernel methods scale poorly when massive datasets are involved, we study one example for saving computation time and memory requirements in more detail. We show that parallelizing spectral algorithms also leads to minimax optimal rates of convergence, provided the number of machines is chosen appropriately.
We emphasize that so far all estimators depend on the a priori smoothness of the target function and on the eigenvalue decay of the kernel covariance operator, which are in general unknown. Obtaining good, purely data-driven estimators constitutes the problem of adaptivity, which we handle for the single-machine problem via a version of the Lepskii principle.
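The parallelization idea — split the data across machines, run the same spectral regularization on each block, and average the local estimators — can be sketched as below. This is a toy illustration with plain kernel ridge regression standing in for the general spectral algorithm; the Gaussian kernel, bandwidth, and regularization parameter are arbitrary choices for the example, not those analyzed in the text.

```python
import numpy as np

def krr_fit_predict(X_tr, y_tr, X_te, lam=1e-3, gamma=10.0):
    # Gaussian-kernel ridge regression on one machine's data block
    K = np.exp(-gamma * (X_tr[:, None] - X_tr[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * len(X_tr) * np.eye(len(X_tr)), y_tr)
    K_te = np.exp(-gamma * (X_te[:, None] - X_tr[None, :]) ** 2)
    return K_te @ alpha

rng = np.random.default_rng(0)
n, m = 400, 4                            # n samples split evenly over m machines
X = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(n)
X_test = np.linspace(-1.0, 1.0, 50)

# divide and conquer: the local estimators are simply averaged
parts = np.array_split(rng.permutation(n), m)
f_avg = np.mean([krr_fit_predict(X[p], y[p], X_test) for p in parts], axis=0)

mse = np.mean((f_avg - np.sin(np.pi * X_test)) ** 2)
```

Each machine only inverts an (n/m) × (n/m) kernel matrix, which is where the computation-time and memory savings mentioned above come from.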
We study corner-degenerate pseudo-differential operators of any singularity order and develop ellipticity based on the principal symbolic hierarchy, associated with the stratification of the underlying space. We construct parametrices within the calculus and discuss the aspect of additional trace and potential conditions along lower-dimensional strata.
Entdeckendes Lernen
(2017)
Despite the demonstrable popularity of discovery learning in German-language mathematics education, there are currently no critical contributions that could help to question and sharpen this fundamental teaching concept. This discussion paper first works out the theory and some implementation examples of discovery learning, in order to show that discovery learning amounts to a vague umbrella term under which questionable teaching environments are often legitimized. Subsequently, problems of discovery learning in mathematics teaching, and possibilities for overcoming them, are discussed on the basis of epistemological, learning-theoretical, didactic, and sociocultural considerations. It emerges that the conception of discovery learning falls short of the current state of knowledge in mathematics education and confronts teachers and students with impossible demands, that the learning-theoretical advantages of discovery learning are often not demonstrable, that the idea of discovery rests on a problematic Platonist understanding of knowledge, and that discovery learning threatens to disadvantage students from educationally deprived backgrounds. Finally, research desiderata are derived whose treatment could help to overcome the problem areas identified.
S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed ones and the forecast passes the S-test (denoted by thin lines); if it does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and use of a small fraction of the RELM testing region.
In this paper, using an algorithm based on the retrospective rejection sampling scheme introduced in [A. Beskos, O. Papaspiliopoulos, and G. O. Roberts, Methodol. Comput. Appl. Probab., 10 (2008), pp. 85-104] and [P. Etore and M. Martinez, ESAIM Probab. Stat., 18 (2014), pp. 686-702], we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift.
Ancient genomes have revolutionized our understanding of Holocene prehistory and, particularly, the Neolithic transition in western Eurasia. In contrast, East Asia has so far received little attention, despite representing a core region at which the Neolithic transition took place independently ~3 millennia after its onset in the Near East. We report genome-wide data from two hunter-gatherers from Devil’s Gate, an early Neolithic cave site (dated to ~7.7 thousand years ago) located in East Asia, on the border between Russia and Korea. Both of these individuals are genetically most similar to geographically close modern populations from the Amur Basin, all speaking Tungusic languages, and, in particular, to the Ulchi. The similarity to nearby modern populations and the low levels of additional genetic material in the Ulchi imply a high level of genetic continuity in this region during the Holocene, a pattern that markedly contrasts with that reported for Europe.
This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As started in the classical papers, the technique is elaborated in generalised Hardy spaces also called Hardy-Smirnov spaces.
Abelian duality is realized naturally by combining differential cohomology and locally covariant quantum field theory. This leads to a C*-algebra of observables, which encompasses the simultaneous discretization of both magnetic and electric fluxes. We discuss the assignment of physically well-behaved states on this algebra and the properties of the associated GNS triple. We show that the algebra of observables factorizes as a suitable tensor product of three C*-algebras: the first factor encodes dynamical information, while the other two capture topological data corresponding to electric and magnetic fluxes. On the former factor and in the case of ultra-static globally hyperbolic spacetimes with compact Cauchy surfaces, we exhibit a state whose two-point correlation function has the same singular structure as a Hadamard state. Specifying suitable counterparts also on the topological factors, we obtain a state for the full theory, ultimately implementing Abelian duality transformations as Hilbert space isomorphisms.
Mental arithmetic is characterised by a tendency to overestimate addition and to underestimate subtraction results: the operational momentum (OM) effect. Here, motivated by contentious explanations of this effect, we developed and tested an arithmetic heuristics and biases model that predicts reverse OM due to cognitive anchoring effects. Participants produced bi-directional lines with lengths corresponding to the results of arithmetic problems. In two experiments, we found regular OM with zero problems (e.g., 3+0, 3-0) but reverse OM with non-zero problems (e.g., 2+1, 4-1). In a third experiment, we tested the prediction of our model. Our results suggest the presence of at least three competing biases in mental arithmetic: a more-or-less heuristic, a sign-space association and an anchoring bias. We conclude that mental arithmetic exhibits shortcuts for decision-making similar to traditional domains of reasoning and problem-solving.
The first main goal of this thesis is to develop a concept of approximate differentiability of higher order for subsets of Euclidean space that allows us to characterize higher order rectifiable sets, in some sense extending well-known facts for functions. We emphasize that for every subset A of Euclidean space and for every integer k ≥ 2 we introduce the approximate differential of order k of A, and we prove that it is a Borel map whose domain is a (possibly empty) Borel set. This concept could be helpful for dealing with higher order rectifiable sets in applications.
The other goal is to extend to general closed sets a well-known theorem of Alberti on the second order rectifiability properties of the boundary of convex bodies. Alberti's theorem provides a stratification of second order rectifiable subsets of the boundary of a convex body based on the dimension of the (convex) normal cone. Considering a suitable generalization of this normal cone for general closed subsets of Euclidean space and employing some results from the first part, we prove that the same stratification exists for every closed set.
This article presents a new and easily implementable method to quantify the so-called coupling distance between the law of a time series and the law of a differential equation driven by Markovian additive jump noise with heavy-tailed jumps, such as α-stable Lévy flights. Coupling distances measure the proximity of the empirical law of the tails of the jump increments and a given power law distribution. In particular, they yield an upper bound for the distance of the respective laws on path space. We prove rates of convergence comparable to the rates of the central limit theorem, which are confirmed by numerical simulations. Our method applied to a paleoclimate time series of glacial climate variability confirms its heavy-tail behavior. In addition, this approach gives evidence for heavy tails in datasets of precipitable water vapor of the Western Tropical Pacific.
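Coupling distances themselves require the machinery of the paper, but the preliminary question — do the jump increments have a power-law tail at all? — can be sketched with a standard Hill estimator on synthetic Pareto-tailed data. This is a generic tail diagnostic, not the paper's method, and the tail index 1.5 and sample sizes below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = 1.5                     # true tail index, as for an alpha-stable-like law
n, k = 20000, 500               # sample size and number of tail order statistics

# symmetric increments with exact Pareto tails: P(|X| > t) = t**(-alpha), t >= 1
x = (rng.pareto(alpha, size=n) + 1.0) * rng.choice([-1.0, 1.0], size=n)

# Hill estimator of the tail index from the k largest absolute values
order = np.sort(np.abs(x))
hill = 1.0 / np.mean(np.log(order[-k:] / order[-k - 1]))
```

For data with a genuine power-law tail, `hill` concentrates around the true index (here 1.5) with fluctuations of order alpha / sqrt(k).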
The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
This longitudinal study examined relationships between student-perceived teaching for meaning, support for autonomy, and competence in mathematics classrooms (Time 1), and students' achievement goal orientations and engagement in mathematics 6 months later (Time 2). We tested whether student-perceived instructional characteristics at Time 1 were indirectly related to student engagement at Time 2, via their achievement goal orientations (Time 2), and whether student gender moderated these relationships. Participants were ninth and tenth graders (55.2% girls) from 46 classrooms in ten secondary schools in Berlin, Germany. Only data from students who participated at both time points were included (N = 746 of the 1118 students at Time 1; dropout 33.27%). Longitudinal structural equation modeling showed that student-perceived teaching for meaning and support for competence indirectly predicted intrinsic motivation and effort, via students' mastery goal orientation. These paths were equivalent for girls and boys. The findings are significant for mathematics education, in identifying motivational processes that partly explain the relationships between student-perceived teaching for meaning and competence support and intrinsic motivation and effort in mathematics.
Integral Fourier operators
(2017)
This volume of contributions, based on lectures delivered at a school on Fourier Integral Operators held in Ouagadougou, Burkina Faso, 14–26 September 2015, provides an introduction to Fourier Integral Operators (FIO) for a readership of Master and PhD students as well as any interested layperson. Considering the wide spectrum of their applications and the richness of the mathematical tools they involve, FIOs lie at the crossroads of many fields. This volume offers the necessary background, whether analytic or geometric, to get acquainted with FIOs, complemented by more advanced material presenting various aspects of active research in the area.
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects, and to identify the underlying sources that contribute significantly to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about its PK in different animal species has to be collected. This drug-specific knowledge, together with general knowledge about the species' physiology, is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate the relevant known physiological processes which contribute to the overall process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, so that reliable parameter estimation becomes more difficult and in part impossible. As a consequence, the integration of powerful estimation approaches like the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used as a confirming instead of a model-informing and model-building input.
Another aggravating circumstance for an integrated approach is that the details of the NLME methodology are hard to access, which prevents these approaches from being adapted to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology; sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine estimates of an NLME model is described.
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach for combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, as well as a priori known unexplained variability, are incorporated to further mechanistically drive the model development. In conclusion, correlation between parameters and between covariates is automatically accounted for, due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues of the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of the drug in the tissue and thereby allows approximating the concentrations in the interstitial fluid. Because the comparison of human in vivo tissue PK with PBPK predictions is so far not established, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows an adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn-and-confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, analysing repeated eye movement data. Our approach gives new insights into the interpretation of these experiments and the processes behind them.
In this paper we present a Bayesian framework for interpolating data in a reproducing kernel Hilbert space associated with a random subdivision scheme, where not only approximations of the values of a function at some missing points can be obtained, but also uncertainty estimates for such predicted values. This random scheme generalizes the usual subdivision by taking into account, at each level, some uncertainty given in terms of suitably scaled noise sequences of i.i.d. Gaussian random variables with zero mean and given variance, and generating, in the limit, a Gaussian process whose correlation structure is characterized and used for computing realizations of the conditional posterior distribution. The hierarchical nature of the procedure may be exploited to reduce the computational cost compared to standard techniques in the case where many prediction points need to be considered.
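The conditioning step — predicting a missing value together with an uncertainty estimate from the limit process's correlation structure — is ordinary Gaussian conditioning. Below is a minimal sketch with a stand-in squared-exponential covariance; the paper derives the actual correlation structure from the random subdivision scheme itself, so the kernel and length scale here are assumptions made for the example.

```python
import numpy as np

def k(s, t, ell=0.5):
    # stand-in squared-exponential covariance on scalar inputs
    return np.exp(-((s[:, None] - t[None, :]) ** 2) / (2 * ell ** 2))

x_obs = np.array([0.0, 0.3, 0.7, 1.0])          # points where values are known
y_obs = np.sin(2 * np.pi * x_obs)
x_new = np.array([0.5])                         # missing point to interpolate

K_oo = k(x_obs, x_obs) + 1e-8 * np.eye(x_obs.size)   # jitter for stability
K_no = k(x_new, x_obs)

# Gaussian conditioning: posterior mean and an uncertainty estimate
mean = K_no @ np.linalg.solve(K_oo, y_obs)
cov = k(x_new, x_new) - K_no @ np.linalg.solve(K_oo, K_no.T)
std = np.sqrt(np.diag(cov))
```

The hierarchical structure mentioned in the abstract would replace the dense solve with level-by-level updates, which is where the computational savings come from.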
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
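For the linear-Gaussian case, the gain-times-error structure shared by the three filters can be sketched with a scalar ensemble Kalman-Bucy filter using the symmetrized (deterministic) innovation term. All model parameters below are made up for the illustration, and the Euler-Maruyama discretization is the simplest possible choice, not the paper's numerical scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
a, h, sig = -0.5, 1.0, 0.5          # signal drift, observation map, signal noise
dt, T, N = 0.01, 5.0, 200           # Euler step, horizon, ensemble size

x_true = 1.0                        # hidden signal
ens = rng.standard_normal(N)        # initial ensemble ~ N(0, 1)

for _ in range(int(T / dt)):
    # signal dX = aX dt + sig dB and observation dZ = hX dt + dW (unit obs. noise)
    x_true += a * x_true * dt + sig * np.sqrt(dt) * rng.standard_normal()
    dZ = h * x_true * dt + np.sqrt(dt) * rng.standard_normal()

    # gain times error: K = P h, applied to the symmetrized innovation
    m, P = ens.mean(), ens.var()
    K = P * h
    ens = (ens + a * ens * dt
           + sig * np.sqrt(dt) * rng.standard_normal(N)
           + K * (dZ - h * (ens + m) / 2 * dt))

err = abs(ens.mean() - x_true)
```

Note that no importance weights or resampling appear anywhere: every particle is nudged by the same feedback-control term, which is exactly the feature contrasted with sequential Monte Carlo in the abstract.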
Maximal subsemigroups of some semigroups of order-preserving mappings on a countably infinite set
(2017)
In this paper, we study the maximal subsemigroups of several semigroups of order-preserving transformations on the natural numbers and the integers, respectively. We determine all maximal subsemigroups of the monoid of all order-preserving injections on the set of natural numbers as well as on the set of integers. Further, we give all maximal subsemigroups of the monoid of all bijections on the integers. For the monoid of all order-preserving transformations on the natural numbers, we also classify all its maximal subsemigroups that contain a particular set of transformations.
In this study, we investigate the climatology of high-latitude total electron content (TEC) variations as observed by the dual-frequency Global Navigation Satellite Systems (GNSS) receivers onboard the Swarm satellite constellation. The distribution of TEC perturbations as a function of geographic/magnetic coordinates and seasons reasonably agrees with that of the Challenging Minisatellite Payload observations published earlier. Categorizing the high-latitude TEC perturbations according to line-of-sight directions between Swarm and GNSS satellites, we can deduce their morphology with respect to the geomagnetic field lines. In the Northern Hemisphere, the perturbation shapes are mostly aligned with the L shell surface, and this anisotropy is strongest in the nightside auroral (substorm) and subauroral regions and weakest in the central polar cap. The results are consistent with the well-known two-cell plasma convection pattern of the high-latitude ionosphere, which is approximately aligned with L shells at auroral regions and crossing different L shells for a significant part of the polar cap. In the Southern Hemisphere, the perturbation structures exhibit noticeable misalignment to the local L shells. Here the direction toward the Sun has an additional influence on the plasma structure, which we attribute to photoionization effects. The larger offset between geographic and geomagnetic poles in the south than in the north is responsible for the hemispheric difference.
We give a new and very short proof of a theorem of Greiner asserting that a positive and contractive C_0-semigroup on an L^p-space is strongly convergent in case it has a strictly positive fixed point and contains an integral operator. Our proof is a streamlined version of a much more general approach to the asymptotic theory of positive semigroups developed recently by the authors. Under the assumptions of Greiner's theorem, this approach becomes particularly elegant and simple. We also give an outlook on several generalisations of this result.
For an arbitrary Euclidean field F we introduce a central extension (G(F), Phi) of SL(2, F) admitting a left-ordering and study its algebraic properties. The elements of G(F) are order-preserving bijections of the convex hull of Q in F. If F = R, then G(F) is isomorphic to the classical universal covering group of the Lie group SL(2, R). Among other results, we show that G(F) is a perfect group which possesses a rank 1 cone of exceptional type. We also prove that its centre is an infinite cyclic group and investigate its normal subgroups.
From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying correlation-based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k^(-2) slope, where k is the frequency, is an acceptable approximation for these spectra, possibly with an exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the latter occur. This is an indication that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales, i.e. less than 2 years, could still be extracted from data to reveal an unexplored part of the core dynamics.
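The processing chain described in the abstract — monthly means, secular variation by annual differences, then a Fourier transform of the coefficient time series — can be sketched as follows. This is a toy illustration only: a synthetic random-walk series stands in for a Gauss coefficient, and the correlation-based field modelling step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 12 * 58                        # 1957-2014, monthly means
g = np.cumsum(rng.normal(size=months))  # synthetic stand-in for a Gauss coefficient

# secular variation via annual differences of the monthly means
sv = g[12:] - g[:-12]

# temporal power spectrum of the secular variation series
spec = np.abs(np.fft.rfft(sv - sv.mean())) ** 2
freq = np.fft.rfftfreq(sv.size, d=1.0 / 12.0)  # cycles per year

# least-squares slope on log-log axes; a k**-2 spectrum yields a slope near -2
mask = freq > 0
slope = np.polyfit(np.log(freq[mask]), np.log(spec[mask]), 1)[0]
```

The `d=1.0/12.0` sample spacing expresses frequencies in cycles per year, so periods of 1-2 years correspond to the upper end of the resolved band.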
For n ∈ N, let X_n = {a_1, a_2, …, a_n} be an n-element set and let F = (X_n; <_f) be a fence, also called a zigzag poset. As usual, we denote by I_n the symmetric inverse semigroup on X_n. We say that a transformation α ∈ I_n is fence-preserving if x <_f y implies xα <_f yα, for all x, y in the domain of α. In this paper, we study the semigroup PFI_n of all partial fence-preserving injections of X_n and its subsemigroup IF_n = {α ∈ PFI_n : α^(-1) ∈ PFI_n}. Clearly, IF_n is an inverse semigroup and contains all regular elements of PFI_n. We characterize Green's relations for the semigroup IF_n. Further, we prove that the semigroup IF_n is generated by its elements of rank ≥ n − 2. Moreover, for n ∈ 2N, we find the least generating set and calculate the rank of IF_n.
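The fence-preserving condition is mechanically checkable. A minimal sketch for the "up" fence a_1 < a_2 > a_3 < a_4 > … on {1, …, n}, where the only strict comparabilities are between adjacent elements; the encoding and function names are illustrative, not from the paper:

```python
def less_f(x, y):
    # up-fence on {1, ..., n}: a_i <_f a_j iff |i - j| == 1 and i is odd
    return abs(x - y) == 1 and x % 2 == 1

def is_fence_preserving(alpha):
    # alpha: a partial injection given as a dict {x: image of x}
    return all(less_f(alpha[x], alpha[y])
               for x in alpha for y in alpha if less_f(x, y))

print(is_fence_preserving({1: 3, 2: 4}))  # True: 1 <_f 2 maps to 3 <_f 4
print(is_fence_preserving({1: 2, 2: 3}))  # False: 1 <_f 2, but not 2 <_f 3
```

An element of IF_n additionally requires the inverse map to pass the same check, which is how the subsemigroup IF_n sits inside PFI_n.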
Background: Infliximab (IFX), an anti-TNF monoclonal antibody approved for the treatment of inflammatory bowel disease, is dosed per kg body weight (BW). However, the rationale for body size adjustment has not been unequivocally demonstrated [1], and first attempts to improve IFX therapy have been undertaken [2]. The aim of our study was to assess the impact of different dosing strategies (i.e. body size-adjusted and fixed dosing) on drug exposure and pharmacokinetic (PK) target attainment. For this purpose, a comprehensive simulation study was performed, using patient characteristics (n=116) from an in-house clinical database.
Methods: IFX concentration-time profiles of 1000 virtual, clinically representative patients were generated using a previously published PK model for IFX in patients with Crohn's disease [3]. For each patient, 1000 profiles accounting for PK variability were considered. The IFX exposure during maintenance treatment was compared for the following dosing strategies: i) fixed dose, and dosing per ii) BW, iii) lean BW (LBW), iv) body surface area (BSA), v) height (HT), vi) body mass index (BMI) and vii) fat-free mass (FFM). For each dosing strategy, the variability in maximum concentration Cmax, minimum concentration Cmin (= C8weeks) and area under the concentration-time curve (AUC), as well as the percentage of patients achieving the PK target Cmin ≥ 3 μg/mL [4], were assessed.
Results: For all dosing strategies, the variability of Cmin (CV ≈ 110%) was highest compared to Cmax and AUC, and was of similar extent regardless of dosing strategy. The proportion of patients reaching the PK target (≈ 1/3) was approximately equal for all dosing strategies.
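The comparison of fixed versus body-size-adjusted dosing can be illustrated with a deliberately simplified one-compartment steady-state sketch. This is not the published IFX model of [3]; the parameter values, covariate relations and variability magnitudes below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
bw = np.clip(rng.normal(70.0, 15.0, n), 40.0, 130.0)  # body weight, kg

# hypothetical one-compartment parameters: allometric scaling + log-normal IIV
cl = 0.012 * (bw / 70.0) ** 0.75 * np.exp(rng.normal(0.0, 0.3, n))  # clearance, L/h
v = 5.0 * (bw / 70.0) * np.exp(rng.normal(0.0, 0.2, n))             # volume, L
tau = 8 * 7 * 24.0                                                   # 8-week interval, h

def steady_state_cmin(dose_mg):
    # trough concentration at steady state after repeated IV bolus dosing
    k = cl / v
    return dose_mg / v * np.exp(-k * tau) / (1.0 - np.exp(-k * tau))

cmin_fixed = steady_state_cmin(350.0)   # fixed 350 mg dose
cmin_bw = steady_state_cmin(5.0 * bw)   # 5 mg/kg, weight-based dosing

for name, c in [("fixed", cmin_fixed), ("per BW", cmin_bw)]:
    cv = 100.0 * c.std() / c.mean()
    target = 100.0 * (c >= 3.0).mean()  # % of patients with Cmin >= 3 ug/mL
    print(f"{name}: CV = {cv:.0f}%, target attainment = {target:.0f}%")
```

Because dose in mg and volume in L give concentrations in mg/L = µg/mL, the 3 µg/mL trough target can be applied directly; in such a sketch the between-patient variability in Cmin is dominated by the clearance variability, mirroring the finding that the dosing strategy itself changes it little.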
Background: Cells are able to communicate and coordinate their function within tissues via secreted factors. Aberrant secretion by cancer cells can modulate this intercellular communication, in particular in highly organised tissues such as the liver. Hepatocytes, the major cell type of the liver, secrete Dickkopf (Dkk), which inhibits Wnt/beta-catenin signalling in an autocrine and paracrine manner. Consequently, Dkk modulates the expression of Wnt/beta-catenin target genes. We present a mathematical model that describes the autocrine and paracrine regulation of hepatic gene expression by Dkk under wild-type conditions as well as in the presence of mutant cells. Results: Our spatial model describes the competition of Dkk and Wnt at receptor level, intra-cellular Wnt/beta-catenin signalling, and the regulation of target gene expression for 21 individual hepatocytes. Autocrine and paracrine regulation is mediated through a feedback mechanism via Dkk and Dkk diffusion along the porto-central axis. Along this axis an APC concentration gradient is modelled as experimentally detected in liver. Simulations of mutant cells demonstrate that already a single mutant cell increases overall Dkk concentration. The influence of the mutant cell on gene expression of surrounding wild-type hepatocytes is limited in magnitude and restricted to hepatocytes in close proximity. To explore the underlying molecular mechanisms, we perform a comprehensive analysis of the model parameters such as diffusion coefficient, mutation strength and feedback strength. Conclusions: Our simulations show that Dkk concentration is elevated in the presence of a mutant cell. However, the impact of these elevated Dkk levels on wild-type hepatocytes is confined in space and magnitude. The combination of inter-and intracellular processes, such as Dkk feedback, diffusion and Wnt/beta-catenin signal transduction, allow wild-type hepatocytes to largely maintain their gene expression.
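A stripped-down version of such a model — 21 cells on a line, constant Dkk secretion, diffusion via a discrete Laplacian with no-flux ends, first-order decay, and one over-secreting mutant — can be integrated to steady state. All rates here are illustrative placeholders, not the fitted values of the study.

```python
import numpy as np

n_cells, dt, steps = 21, 0.01, 20000
D, decay = 0.5, 0.2                 # diffusion and degradation rates (arbitrary units)
secretion = np.ones(n_cells)
secretion[10] = 5.0                 # hypothetical mutant cell over-secreting Dkk

dkk = np.zeros(n_cells)
for _ in range(steps):
    lap = np.zeros(n_cells)         # discrete Laplacian, no-flux boundaries
    lap[1:-1] = dkk[:-2] - 2.0 * dkk[1:-1] + dkk[2:]
    lap[0] = dkk[1] - dkk[0]
    lap[-1] = dkk[-2] - dkk[-1]
    dkk += dt * (D * lap + secretion - decay * dkk)

# Dkk is elevated around the mutant, but the perturbation stays local:
# the decay length sqrt(D/decay) is under two cell widths here
print(dkk[10] > dkk[8] > dkk[0])
```

Even this crude sketch reproduces the qualitative conclusion of the abstract: a single mutant raises the overall Dkk level, yet its influence on distant wild-type cells is confined in space by the ratio of diffusion to degradation.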
We establish in this paper the existence of weak solutions of infinite-dimensional shift invariant stochastic differential equations driven by a Brownian term. The drift function is very general, in the sense that it is supposed to be neither bounded nor continuous, nor Markov. On the initial law we only assume that it admits a finite specific entropy and a finite second moment.
The originality of our method lies in the use of the specific entropy as a tightness tool and in the description of such an infinite-dimensional stochastic process as the solution of a variational problem on the path space. Our result clearly improves previous ones obtained for free dynamics with bounded drift.
Audience response systems (ARS) complement university teaching by strengthening student activation and involving students directly in the lecture. A wealth of solutions exists, which either require no dedicated hardware (so-called software clickers) or presuppose the purchase of mostly commercial hardware solutions. Hands.UP attempts to build an integrative bridge here. Based on a cost and effort estimate for selected ARS solutions, the need for cross-university cooperation regarding the adequate further development and classroom use of ARS is motivated.
Prospective and retrospective evaluation of five-year earthquake forecast models for California
(2017)
We introduce an abstract concept of quantum field theory on categories fibered in groupoids over the category of spacetimes. This provides us with a general and flexible framework to study quantum field theories defined on spacetimes with extra geometric structures such as bundles, connections and spin structures. Using right Kan extensions, we can assign to any such theory an ordinary quantum field theory defined on the category of spacetimes and we shall clarify under which conditions it satisfies the axioms of locally covariant quantum field theory. The same constructions can be performed in a homotopy theoretic framework by using homotopy right Kan extensions, which allows us to obtain first toy-models of homotopical quantum field theories resembling some aspects of gauge theories.
Raum und Form
(2017)
We show a connection between the CDE′ inequality introduced in Horn et al. (Volume doubling, Poincaré inequality and Gaussian heat kernel estimate for nonnegative curvature graphs. arXiv:1411.5087v2, 2014) and the CDψ inequality established in Münch (Li–Yau inequality on finite graphs via non-linear curvature dimension conditions. arXiv:1412.3340v1, 2014). In particular, we introduce a CDφψ inequality as a slight generalization of CDψ which turns out to be equivalent to CDE′ with appropriate choices of φ and ψ. We use this to prove that the CDE′ inequality implies the classical CD inequality on graphs, and that the CDE′ inequality with curvature bound zero holds on Ricci-flat graphs.
We prove that the Atiyah–Singer Dirac operator in L2 depends Riesz continuously on L∞ perturbations of complete metrics g on a smooth manifold. The Lipschitz bound for the map depends on bounds on Ricci curvature and its first derivatives as well as a lower bound on injectivity radius. Our proof uses harmonic analysis techniques related to Calderón’s first commutator and the Kato square root problem. We also show perturbation results for more general functions of general Dirac-type operators on vector bundles.
Particle filters (also called sequential Monte Carlo methods) are widely used for state and parameter estimation problems in the context of nonlinear evolution equations. The recently proposed ensemble transform particle filter (ETPF) [S. Reich, SIAM J. Sci. Comput., 35 (2013), pp. A2013-A2024] replaces the resampling step of a standard particle filter by a linear transformation which allows for a hybridization of particle filters with ensemble Kalman filters and renders the resulting hybrid filters applicable to spatially extended systems. However, the linear transformation step is computationally expensive and leads to an underestimation of the ensemble spread for small and moderate ensemble sizes. Here we address both of these shortcomings by developing second-order accurate extensions of the ETPF. These extensions allow one in particular to replace the exact solution of a linear transport problem by its Sinkhorn approximation. It is also demonstrated that the nonlinear ensemble transform filter arises as a special case of our general framework. We illustrate the performance of the second-order accurate filters for the chaotic Lorenz-63 and Lorenz-96 models and a dynamic scene-viewing model. The numerical results for the Lorenz-63 and Lorenz-96 models demonstrate that significant accuracy improvements can be achieved in comparison to a standard ensemble Kalman filter and the ETPF for small to moderate ensemble sizes. The numerical results for the scene-viewing model reveal, on the other hand, that second-order corrections can lead to statistically inconsistent samples from the posterior parameter distribution.
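The Sinkhorn approximation of the ETPF transport step mentioned in the abstract can be sketched as an entropically regularised coupling between the importance weights and the uniform distribution; the regularisation parameter, iteration count and toy likelihood below are illustrative choices, not those of the paper.

```python
import numpy as np

def sinkhorn_coupling(w, x, eps=0.2, iters=2000):
    # w: importance weights summing to 1; x: ensemble of shape (M, d)
    M = w.size
    C = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-C / eps)
    u, v = np.ones(M), np.ones(M)
    b = np.full(M, 1.0 / M)
    for _ in range(iters):
        u = w / (K @ v)          # enforce row marginals ~ w
        v = b / (K.T @ u)        # enforce column marginals ~ 1/M
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
M, d = 20, 1
x = rng.normal(size=(M, d))
w = np.exp(-0.5 * (x[:, 0] - 1.0) ** 2)  # toy likelihood weights for an observation at 1
w /= w.sum()

T = sinkhorn_coupling(w, x)
x_new = M * T.T @ x   # deterministic ensemble transform in place of resampling
```

Each transformed particle is a convex combination of the old ones, so the ensemble mean matches the weighted posterior mean up to the Sinkhorn tolerance; it is exactly this averaging that underestimates spread and motivates the second-order corrections of the paper.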