TY - JOUR A1 - Menne, Ulrich T1 - Weakly Differentiable Functions on Varifolds JF - Indiana University mathematics journal N2 - The present paper is intended to provide the basis for the study of weakly differentiable functions on rectifiable varifolds with locally bounded first variation. The concept proposed here is defined by means of integration-by-parts identities for certain compositions with smooth functions. In this class, the idea of zero boundary values is realised using the relative perimeter of superlevel sets. Results include a variety of Sobolev-Poincaré-type embeddings, embeddings into spaces of continuous and sometimes Hölder-continuous functions, and pointwise differentiability results both of approximate and integral type as well as coarea formulae. As a prerequisite for this study, decomposition properties of such varifolds and a relative isoperimetric inequality are established. Both involve a concept of distributional boundary of a set introduced for this purpose. As applications, the finiteness of the geodesic distance associated with varifolds with suitable summability of the mean curvature and a characterisation of curvature varifolds are obtained. KW - Rectifiable varifold KW - (generalised) weakly differentiable function KW - distributional boundary KW - decomposition KW - relative isoperimetric inequality KW - Sobolev-Poincaré inequality KW - approximate differentiability KW - coarea formula KW - geodesic distance KW - curvature varifold Y1 - 2016 U6 - https://doi.org/10.1512/iumj.2016.65.5829 SN - 0022-2518 SN - 1943-5258 VL - 65 SP - 977 EP - 1088 PB - Indiana University, Department of Mathematics CY - Bloomington ER - TY - INPR A1 - Alsaedy, Ammar T1 - Variational primitive of a differential form N2 - In this paper we specify the Dirichlet-to-Neumann operator related to the Cauchy problem for the gradient operator with data on a part of the boundary. To this end, we consider a nonlinear relaxation of this problem which is a mixed boundary problem of Zaremba type for the p-Laplace equation. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 4 KW - Dirichlet-to-Neumann operator KW - Cauchy problem KW - p-Laplace operator KW - calculus of variations Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-89223 SN - 2193-6943 VL - 5 IS - 4 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Kretschmer, Marlene A1 - Coumou, Dim A1 - Donges, Jonathan Friedemann A1 - Runge, Jakob T1 - Using Causal Effect Networks to Analyze Different Arctic Drivers of Midlatitude Winter Circulation JF - Journal of climate N2 - In recent years, the Northern Hemisphere midlatitudes have suffered from severe winters like the extreme 2012/13 winter in the eastern United States. These cold spells were linked to a meandering upper-tropospheric jet stream pattern and a negative Arctic Oscillation index (AO). However, the nature of the drivers behind these circulation patterns remains controversial. Various studies have proposed different mechanisms related to changes in the Arctic, most of them related to a reduction in sea ice concentrations or increasing Eurasian snow cover. Here, a novel type of time series analysis, called causal effect networks (CEN), based on graphical models is introduced to assess causal relationships and their time delays between different processes. The effect of different Arctic actors on winter circulation on weekly to monthly time scales is studied, and robust network patterns are found.
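To illustrate the CEN idea just described — a lagged link between two processes is kept only if it survives conditioning on other information — here is a minimal sketch in Python. The function names, the single conditioning choice (the target's own past) and all data are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def partial_corr(a, b, z):
    # Partial correlation of a and b given the columns of the conditioning matrix z:
    # regress both on z, then correlate the residuals.
    ra = a - z @ np.linalg.lstsq(z, a, rcond=None)[0]
    rb = b - z @ np.linalg.lstsq(z, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

def cen_link(x, y, lag, max_autolag=2):
    # Strength of the lagged link x(t - lag) -> y(t), conditioned on y's own past.
    t0 = max(lag, max_autolag)
    target = y[t0:]
    driver = x[t0 - lag:len(x) - lag]
    past = np.column_stack([y[t0 - k:len(y) - k] for k in range(1, max_autolag + 1)])
    return partial_corr(driver, target, past)

# Toy example: y is driven by x with a delay of two steps plus noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.7 * np.roll(x, 2) + 0.5 * rng.standard_normal(500)
print(cen_link(x, y, lag=2))   # clearly nonzero: a candidate causal link
print(cen_link(x, y, lag=5))   # near zero: no link at the wrong lag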
Barents and Kara sea ice concentrations are detected to be important external drivers of the midlatitude circulation, influencing winter AO via tropospheric mechanisms and through processes involving the stratosphere. Eurasian snow cover is also detected to have a causal effect on sea level pressure in Asia, but its exact role in the AO remains unclear. The CEN approach presented in this study overcomes some difficulties in interpreting correlation analyses, complements model experiments for testing hypotheses involving teleconnections, and can be used to assess their validity. The findings confirm that sea ice concentrations in autumn in the Barents and Kara Seas are an important driver of winter circulation in the midlatitudes. Y1 - 2016 U6 - https://doi.org/10.1175/JCLI-D-15-0654.1 SN - 0894-8755 SN - 1520-0442 VL - 29 SP - 4069 EP - 4081 PB - American Meteorological Soc. CY - Boston ER - TY - INPR A1 - Gairing, Jan A1 - Högele, Michael A1 - Kosenkova, Tetiana T1 - Transportation distances and noise sensitivity of multiplicative Lévy SDE with applications N2 - This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case in terms of coupling distances to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and the example of a benchmark time series in paleoclimate. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 2 KW - stochastic differential equations KW - multiplicative Lévy noise KW - Lévy type processes KW - heavy-tailed distributions KW - model selection KW - Wasserstein distance KW - time series Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-86693 SN - 2193-6943 VL - 5 IS - 2 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Acevedo, Walter A1 - Reich, Sebastian A1 - Cubasch, Ulrich T1 - Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques JF - Climate dynamics : observational, theoretical and computational research on the climate system N2 - This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width chronologies as observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of two variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic two-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill, as compared to the utilization of a time-averaged (TA) linear observation operator.
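To make the "principle of limiting factors" and the "thresholded response" mentioned above concrete, here is a minimal Python sketch; the ramp windows and all numbers are invented for illustration and are not VSL's calibrated parameters.

import numpy as np

def ramp(value, lower, upper):
    # Piecewise-linear growth response, clipped to [0, 1]: the "thresholded response".
    return np.clip((value - lower) / (upper - lower), 0.0, 1.0)

def growth_rate(temperature, moisture):
    g_t = ramp(temperature, lower=5.0, upper=18.0)   # assumed temperature window
    g_m = ramp(moisture, lower=0.1, upper=0.4)       # assumed moisture window
    # "Switching recording": whichever factor is smaller limits growth.
    return np.minimum(g_t, g_m)

# A pseudo tree-ring-width value is then a time-averaged accumulation of daily growth.
days = np.linspace(0.0, 2.0 * np.pi, 365)
temp = 10.0 + 8.0 * np.sin(days)
moist = 0.25 + 0.1 * np.cos(days)
print(growth_rate(temp, moist).mean())

The non-smooth minimum and the clipping are exactly the kind of nonlinearity that the next sentences link to the loss of assimilation skill.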
In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes. KW - Proxy forward modeling KW - Data assimilation KW - Fuzzy logic KW - Ensemble Kalman filter KW - Paleoclimate reconstruction Y1 - 2016 U6 - https://doi.org/10.1007/s00382-015-2683-1 SN - 0930-7575 SN - 1432-0894 VL - 46 SP - 1909 EP - 1920 PB - Springer CY - New York ER - TY - JOUR A1 - Stolle, Claudia A1 - Michaelis, Ingo A1 - Rauberg, Jan T1 - The role of high-resolution geomagnetic field models for investigating ionospheric currents at low Earth orbit satellites JF - Earth, planets and space N2 - Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that have amplitudes of only a few nanotesla in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signals (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
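Schematically, the residual approach described in this abstract isolates the ionospheric contribution as $ \Delta B_{\mathrm{iono}} \approx B_{\mathrm{obs}} - B_{\mathrm{core}} - B_{\mathrm{lithosphere}} - B_{\mathrm{magnetosphere}} $, so any unmodelled lithospheric or magnetospheric signal leaks directly into the ionospheric estimate; the notation here is a schematic illustration, not taken from the paper.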
KW - Geomagnetic field KW - Ionospheric current KW - Geomagnetic models Y1 - 2016 U6 - https://doi.org/10.1186/s40623-016-0494-1 SN - 1880-5981 VL - 68 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Denecke, Klaus-Dieter T1 - The partial clone of linear terms JF - Siberian Mathematical Journal N2 - Generalizing a linear expression over a vector space, we call a term of an arbitrary type tau linear if each of its variables occurs only once. Instead of the usual superposition of terms and of the total many-sorted clone of all terms, we define, in the case of linear terms, the partial many-sorted superposition operation and the partial many-sorted clone, which satisfies the superassociative law as a weak identity. The extensions of linear hypersubstitutions are weak endomorphisms of this partial clone. For a variety V of one-sorted total algebras of type tau, we define the partial many-sorted linear clone of V as the partial quotient algebra of the partial many-sorted clone of all linear terms by the set of all linear identities of V. We prove then that weak identities of this clone correspond to linear hyperidentities of V. KW - linear term KW - clone KW - partial clone KW - linear hypersubstitution KW - linear identity KW - linear hyperidentity Y1 - 2016 U6 - https://doi.org/10.1134/S0037446616040030 SN - 0037-4466 SN - 1573-9260 VL - 57 SP - 589 EP - 598 PB - Pleiades Publ. CY - New York ER - TY - INPR A1 - Mera, Azal A1 - Tarkhanov, Nikolai Nikolaevich T1 - The Neumann problem after Spencer N2 - When trying to extend the Hodge theory for elliptic complexes on compact closed manifolds to the case of compact manifolds with boundary, one is led to a boundary value problem for the Laplacian of the complex which is usually referred to as the Neumann problem. We study the Neumann problem for a larger class of sequences of differential operators on a compact manifold with boundary. These are sequences of small curvature, i.e., bearing the property that the composition of any two neighbouring operators has order less than two. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 6 KW - elliptic complex KW - manifold with boundary KW - Hodge theory KW - Neumann problem Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-90631 SN - 2193-6943 VL - 5 IS - 6 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Zöller, Gert A1 - Holschneider, Matthias T1 - The Maximum Possible and the Maximum Expected Earthquake Magnitude for Production-Induced Earthquakes at the Gas Field in Groningen, The Netherlands JF - Bulletin of the Seismological Society of America N2 - The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M(w) 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including estimation of the maximum earthquake magnitude, is considered necessary to decide on the future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m(max). Second, we calculate the maximum expected magnitude M-T in the time between 2016 and 2024 for three assumed gas-production scenarios.
Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m(max) = 4.4, whereas M-T varies between 3.9 and 4.3, depending on the production scenario. Y1 - 2016 U6 - https://doi.org/10.1785/0120160220 SN - 0037-1106 SN - 1943-3573 VL - 106 SP - 2917 EP - 2921 PB - Seismological Society of America CY - Albany ER - TY - JOUR A1 - Kistner, Saskia A1 - Burns, Bruce D. A1 - Vollmeyer, Regina A1 - Kortenkamp, Ulrich T1 - The importance of understanding: Model space moderates goal specificity effects JF - The quarterly journal of experimental psychology N2 - The three-space theory of problem solving predicts that the quality of a learner's model and the goal specificity of a task interact on knowledge acquisition. In Experiment 1 participants used a computer simulation of a lever system to learn about torques. They either had to test hypotheses (nonspecific goal), or to produce given values for variables (specific goal). In the good- but not in the poor-model condition they saw torque depicted as an area. Results revealed the predicted interaction. A nonspecific goal only resulted in better learning when a good model of torques was provided. In Experiment 2 participants learned to manipulate the inputs of a system to control its outputs. A nonspecific goal to explore the system helped performance when compared to a specific goal to reach certain values when participants were given a good model, but not when given a poor model that suggested the wrong hypothesis space. Our findings support the three-space theory. They emphasize the importance of understanding for problem solving and stress the need to study underlying processes. KW - Goal specificity KW - Problem solving KW - Three-space theory KW - Scientific discovery learning Y1 - 2016 U6 - https://doi.org/10.1080/17470218.2015.1076865 SN - 1747-0218 SN - 1747-0226 VL - 69 SP - 1179 EP - 1196 PB - Optical Society of America CY - Abingdon ER - TY - JOUR A1 - Zöller, Gert A1 - Holschneider, Matthias T1 - The Earthquake History in a Fault Zone Tells Us Almost Nothing about m(max) JF - Seismological research letters N2 - In the present study, we summarize and evaluate the endeavors from recent years to estimate the maximum possible earthquake magnitude m(max) from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of lowest and highest degree of uncertainty of m(max). In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m(max) unless reliable and homogeneous data covering a long time interval, including several earthquakes with magnitude close to m(max), are available. Even if detailed earthquake information from several centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest ones, will contribute at all to the estimation of m(max), and this results in unacceptably high uncertainties. As a consequence, estimators of m(max) in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
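A numerical sketch of the kind of frequentist reasoning at issue in these two abstracts follows; under a Gutenberg-Richter (truncated exponential) law, essentially only the largest of the n observed magnitudes is informative about the truncation point m(max). All numbers (b-value, catalog size, magnitudes) are invented for illustration, and this is not the authors' estimator.

import numpy as np

def cdf_truncated_gr(m, m0, m_max, beta):
    # CDF of a Gutenberg-Richter law with lower cutoff m0, truncated at m_max
    # (beta = b * ln 10).
    return (1.0 - np.exp(-beta * (m - m0))) / (1.0 - np.exp(-beta * (m_max - m0)))

def p_largest_below(mu, n, m0, m_max, beta):
    # P(largest of n i.i.d. magnitudes <= mu) for a candidate m_max.
    return cdf_truncated_gr(mu, m0, m_max, beta) ** n

# Invented catalog: n events above m0 = 1.5, largest observed magnitude mu = 3.6.
n, m0, mu, beta = 300, 1.5, 3.6, 1.0 * np.log(10)

# One-sided construction: a candidate m_max is outside the 90% confidence region
# when the observed maximum would be improbably small under it, P(max <= mu) < 0.10.
# Because P(max <= mu) stays positive as m_max grows, the region is often
# unbounded above -- which is the effect the papers stress.
for m_max in [3.7, 4.0, 4.4, 5.0, 6.0]:
    print(m_max, p_largest_below(mu, n, m0, m_max, beta))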
Y1 - 2016 U6 - https://doi.org/10.1785/0220150176 SN - 0895-0695 SN - 1938-2057 VL - 87 SP - 132 EP - 137 PB - Seismological Society of America CY - Albany ER - TY - JOUR A1 - Levy, Cyril A1 - Jimenez, Carolina Neira A1 - Paycha, Sylvie T1 - THE CANONICAL TRACE AND THE NONCOMMUTATIVE RESIDUE ON THE NONCOMMUTATIVE TORUS JF - Transactions of the American Mathematical Society N2 - Using a global symbol calculus for pseudodifferential operators on tori, we build a canonical trace on classical pseudodifferential operators on noncommutative tori in terms of a canonical discrete sum on the underlying toroidal symbols. We characterise the canonical trace on operators on the noncommutative torus as well as its underlying canonical discrete sum on symbols of fixed (resp. any) noninteger order. On the grounds of this uniqueness result, we prove that in the commutative setup, this canonical trace on the noncommutative torus reduces to Kontsevich and Vishik's canonical trace, which is thereby identified with a discrete sum. A similar characterisation for the noncommutative residue on noncommutative tori as the unique trace which vanishes on trace-class operators generalises Fathizadeh and Wong's characterisation insofar as it includes the case of operators of fixed integer order. By means of the canonical trace, we derive defect formulae for regularized traces. The conformal invariance of the $ \zeta $-function at zero of the Laplacian on the noncommutative torus is then a straightforward consequence. Y1 - 2016 U6 - https://doi.org/10.1090/tran/6369 SN - 0002-9947 SN - 1088-6850 VL - 368 SP - 1051 EP - 1095 PB - American Mathematical Soc. CY - Providence ER - TY - JOUR A1 - Hack, Thomas-Paul A1 - Hanisch, Florian A1 - Schenkel, Alexander T1 - Supergeometry in Locally Covariant Quantum Field Theory JF - Communications in mathematical physics N2 - In this paper we analyze supergeometric locally covariant quantum field theories. We develop suitable categories SLoc of super-Cartan supermanifolds, which generalize Lorentz manifolds in ordinary quantum field theory, and show that, starting from a few representation theoretic and geometric data, one can construct a functor U : SLoc -> S*Alg to the category of super-*-algebras, which can be interpreted as a non-interacting super-quantum field theory. This construction turns out to disregard supersymmetry transformations as the morphism sets in the above categories are too small. We then solve this problem by using techniques from enriched category theory, which allows us to replace the morphism sets by suitable morphism supersets that contain supersymmetry transformations as their higher superpoints. We construct superquantum field theories in terms of enriched functors eU : eSLoc -> eS*Alg between the enriched categories and show that supersymmetry transformations are appropriately described within the enriched framework. As examples we analyze the superparticle in 1|1 dimensions and the free Wess-Zumino model in 3|2 dimensions. Y1 - 2016 U6 - https://doi.org/10.1007/s00220-015-2516-4 SN - 0010-3616 SN - 1432-0916 VL - 342 SP - 615 EP - 673 PB - Springer CY - New York ER - TY - JOUR A1 - Shtrakov, Slavcho A1 - Koppitz, Jörg T1 - Stable varieties of semigroups and groupoids JF - Algebra universalis N2 - The paper deals with Sigma-composition and Sigma-essential composition of terms which lead to stable and s-stable varieties of algebras.
A full description of all stable varieties of semigroups, commutative and idempotent groupoids is obtained. We use an abstract reduction system which simplifies the presentations of terms of type tau = (2) to study the variety of idempotent groupoids and s-stable varieties of groupoids. S-stable varieties are a variation of stable varieties, used to highlight replacement of subterms of a term in a deductive system instead of the usual replacement of variables by terms. KW - composition of terms KW - essential position in terms KW - stable variety Y1 - 2016 U6 - https://doi.org/10.1007/s00012-015-0359-7 SN - 0002-5240 SN - 1420-8911 VL - 75 SP - 85 EP - 106 PB - Springer CY - Basel ER - TY - JOUR A1 - Nehring, Benjamin A1 - Rafler, Mathias A1 - Zessin, Hans T1 - Splitting-characterizations of the Papangelou process JF - Mathematische Nachrichten N2 - For point processes we establish a link between integration-by-parts and splitting formulas, which can also be considered as integration-by-parts formulas of a new type. First we characterize finite Papangelou processes in terms of their splitting kernels. The main part then consists in extending these results to the case of infinitely extended Papangelou and, in particular, Pólya and Gibbs processes. KW - Papangelou processes KW - characterization of point processes KW - independent splittings KW - Gibbs processes Y1 - 2016 U6 - https://doi.org/10.1002/mana.201400384 SN - 0025-584X SN - 1522-2616 VL - 289 SP - 85 EP - 96 PB - Wiley-VCH CY - Weinheim ER - TY - JOUR A1 - Menne, Ulrich T1 - Sobolev functions on varifolds JF - Proceedings of the London Mathematical Society N2 - This paper introduces first-order Sobolev spaces on certain rectifiable varifolds. These complete locally convex spaces are contained in the generally non-linear class of generalised weakly differentiable functions and share key functional analytic properties with their Euclidean counterparts. Assuming the varifold to satisfy a uniform lower density bound and a dimensionally critical summability condition on its mean curvature, the following statements hold. Firstly, continuous and compact embeddings of Sobolev spaces into Lebesgue spaces and spaces of continuous functions are available. Secondly, the geodesic distance associated to the varifold is a continuous, not necessarily Hölder continuous Sobolev function with bounded derivative. Thirdly, if the varifold additionally has bounded mean curvature and finite measure, then the present Sobolev spaces are isomorphic to those previously available for finite Radon measures, yielding many new results for those classes as well. Suitable versions of the embedding results obtained for Sobolev functions hold in the larger class of generalised weakly differentiable functions. Y1 - 2016 U6 - https://doi.org/10.1112/plms/pdw023 SN - 0024-6115 SN - 1460-244X VL - 113 SP - 725 EP - 774 PB - Oxford Univ. Press CY - Oxford ER - TY - JOUR A1 - Kröncke, Klaus T1 - Rigidity and Infinitesimal Deformability of Ricci Solitons JF - The journal of geometric analysis N2 - In this paper, an obstruction against the integrability of certain infinitesimal solitonic deformations is given. Using this obstruction, we show that the complex projective spaces of even complex dimension are rigid as Ricci solitons although they have infinitesimal solitonic deformations.
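For orientation, the standard gradient Ricci soliton equation underlying this abstract reads $ \mathrm{Ric}(g) + \nabla^{2} f = c\, g $ for a smooth potential $ f $ and a constant $ c $; rigidity asks whether solutions of the linearized equation (infinitesimal solitonic deformations) integrate to genuine curves of solitons. This reminder is generic background, not a statement taken from the paper.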
KW - Ricci solitons KW - Moduli space KW - Linearized equation KW - Integrability Y1 - 2016 U6 - https://doi.org/10.1007/s12220-015-9608-4 SN - 1050-6926 SN - 1559-002X VL - 26 SP - 1795 EP - 1807 PB - Springer CY - New York ER - TY - THES A1 - Scharrer, Christian T1 - Relating diameter and mean curvature for varifolds T1 - Relativer Diameter und mittlere Krümmung für Varifaltigkeiten N2 - The main results of this thesis are formulated in a class of surfaces (varifolds) generalizing closed and connected smooth submanifolds of Euclidean space which allows singularities. Consider an indecomposable varifold of dimension at least two in some Euclidean space such that the first variation is locally bounded, the total variation is absolutely continuous with respect to the weight measure, the density of the weight measure is at least one outside a set of weight measure zero, and the generalized mean curvature is locally summable to a natural power (the dimension of the varifold minus one) with respect to the weight measure. For such varifolds, the thesis presents an improved estimate of the set where the lower density is small in terms of the one-dimensional Hausdorff measure. Moreover, if the support of the weight measure is compact, then the intrinsic diameter with respect to the support of the weight measure is estimated in terms of the generalized mean curvature. This estimate is analogous to Peter Topping's diameter control for closed connected manifolds smoothly immersed in Euclidean space. Previously, it was not known whether the hypotheses in this thesis imply that two points in the support of the weight measure have finite geodesic distance. N2 - Die wichtigsten Ergebnisse dieser Arbeit sind formuliert für eine Klasse von Oberflächen (Varifaltigkeiten), welche geschlossene glatte Untermannigfaltigkeiten des euklidischen Raums verallgemeinern und Singularitäten erlauben. Gegeben sei eine mindestens zwei-dimensionale unzerlegbare Varifaltigkeit im euklidischen Raum, sodass die erste Variation lokal beschränkt ist, die totale Variation absolut stetig bezüglich dem Gewichtsmaß ist, die Dichte des Gewichtsmaßes außerhalb einer Nullmenge mindestens eins ist, und die verallgemeinerte mittlere Krümmung bezüglich dem Gewichtsmaß lokal summierbar zu einer natürlichen Potenz (Dimension der Varifaltigkeit minus eins) ist. Es wird die Menge, wo die untere Dichte klein ist, durch das ein-dimensionale Hausdorff-Maß abgeschätzt. Das Ergebnis ist eine neue, stark verbesserte untere Dichte-Schranke. Ist der Träger des Gewichtsmaßes kompakt, so wird der intrinsische Diameter des Trägers des Gewichtsmaßes abgeschätzt durch ein Integral der verallgemeinerten mittleren Krümmung. Diese Ungleichung ist analog zu der Ungleichung von Peter Topping für geschlossene zusammenhängende Mannigfaltigkeiten, welche durch eine glatte Immersion in den euklidischen Raum eingebettet sind. Bisher war nicht bekannt, dass die oben genannten Annahmen an die Varifaltigkeit implizieren, dass der geodätische Abstand zweier Punkte aus dem Träger des Gewichtsmaßes endlich ist.
KW - varifold KW - rectifiable varifold KW - indecomposable varifold KW - first variation KW - mean curvature KW - isoperimetric inequality KW - density of a measure KW - geodesic distance KW - intrinsic diameter KW - Varifaltigkeit KW - rektifizierbare Varifaltigkeit KW - unzerlegbare Varifaltigkeit KW - erste Variation KW - mittlere Krümmung KW - isoperimetrische Ungleichung KW - Dichte eines Maßes KW - geodätischer Abstand KW - intrinsischer Diameter Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-97013 ER - TY - THES A1 - Cheng, Yuan T1 - Recursive state estimation in dynamical systems Y1 - 2016 ER - TY - JOUR A1 - Sinclair, Nathalie A1 - Bussi, Maria G. Bartolini A1 - de Villiers, Michael A1 - Jones, Keith A1 - Kortenkamp, Ulrich A1 - Leung, Allen A1 - Owens, Kay T1 - Recent research on geometry education: an ICME-13 survey team report JF - ZDM : The International Journal on Mathematics Education N2 - This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuospatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and, moving beyond traditional Euclidean approaches. Within each theme, we identify relevant research and also offer commentary on future directions. KW - Geometry KW - Technology KW - Diagrams KW - Definitions KW - Gestures KW - Proving KW - Digital technology KW - Visuospatial reasoning Y1 - 2016 U6 - https://doi.org/10.1007/s11858-016-0796-6 SN - 1863-9690 SN - 1863-9704 VL - 48 SP - 691 EP - 719 PB - Springer CY - Heidelberg ER - TY - JOUR A1 - Chang, D. -C. A1 - Viahmoudi, M. Hedayat A1 - Schulze, Bert-Wolfgang T1 - PSEUDO-DIFFERENTIAL ANALYSIS WITH TWISTED SYMBOLIC STRUCTURE JF - Journal of nonlinear and convex analysis : an international journal N2 - This paper is devoted to pseudo-differential operators and new applications. We establish necessary extensions of the standard calculus to specific classes of operator-valued symbols occurring in principal symbolic hierarchies of operators on manifolds with singularities or stratified spaces. KW - Pseudo-differential operators KW - boundary value problems KW - operator valued symbols KW - Fourier transform Y1 - 2016 SN - 1345-4773 SN - 1880-5221 VL - 17 SP - 1889 EP - 1937 PB - Yokohama Publishers CY - Yokohama ER - TY - THES A1 - Ludewig, Matthias T1 - Path integrals on manifolds with boundary and their asymptotic expansions T1 - Pfadintegrale auf Mannigfaltigkeiten mit Rand und ihre asymptotischen Entwicklungen N2 - It is "scientific folklore" coming from physical heuristics that solutions to the heat equation on a Riemannian manifold can be represented by a path integral. However, the problem with such path integrals is that they are notoriously ill-defined.
One way to make them rigorous (which is often applied in physics) is finite-dimensional approximation, or time-slicing approximation: Given a fine partition of the time interval into small subintervals, one restricts the integration domain to paths that are geodesic on each subinterval of the partition. These finite-dimensional integrals are well-defined, and the (infinite-dimensional) path integral is then defined as the limit of these (suitably normalized) integrals, as the mesh of the partition tends to zero. In this thesis, we show that indeed, solutions to the heat equation on a general compact Riemannian manifold with boundary are given by such time-slicing path integrals. Here we consider the heat equation for general Laplace type operators, acting on sections of a vector bundle. We also obtain similar results for the heat kernel, although in this case, one has to restrict to metrics satisfying a certain smoothness condition at the boundary. One of the most important manipulations one would like to do with path integrals is taking their asymptotic expansions; in the case of the heat kernel, this is the short time asymptotic expansion. In order to use time-slicing approximation here, one needs the approximation to be uniform in the time parameter. We show that this is possible by giving strong error estimates. Finally, we apply these results to obtain short time asymptotic expansions of the heat kernel also in degenerate cases (i.e. at the cut locus). Furthermore, our results allow us to relate the asymptotic expansion of the heat kernel to a formal asymptotic expansion of the infinite-dimensional path integral, which gives relations between geometric quantities on the manifold and on the loop space. In particular, we show that the lowest order term in the asymptotic expansion of the heat kernel is essentially given by the Fredholm determinant of the Hessian of the energy functional. We also investigate how this relates to the zeta-regularized determinant of the Jacobi operator along minimizing geodesics. N2 - Es ist "wissenschaftliche Folklore", abgeleitet von der physikalischen Anschauung, dass Lösungen der Wärmeleitungsgleichung auf einer riemannschen Mannigfaltigkeit als Pfadintegrale dargestellt werden können. Das Problem mit Pfadintegralen ist allerdings, dass schon deren Definition Mathematiker vor gewisse Probleme stellt. Eine Möglichkeit, Pfadintegrale rigoros zu definieren, ist endlich-dimensionale Approximation, oder time-slicing-Approximation: Für eine gegebene Unterteilung des Zeitintervalls in kleine Teilintervalle schränkt man den Integrationsbereich auf diejenigen Pfade ein, die auf jedem Teilintervall geodätisch sind. Diese endlichdimensionalen Integrale sind wohldefiniert, und man definiert das (unendlichdimensionale) Pfadintegral als den Limes dieser (passend normierten) Integrale, wenn die Feinheit der Unterteilung gegen Null geht. In dieser Arbeit wird gezeigt, dass Lösungen der Wärmeleitungsgleichung auf einer allgemeinen riemannschen Mannigfaltigkeit tatsächlich durch eine solche endlichdimensionale Approximation gegeben sind. Hierbei betrachten wir die Wärmeleitungsgleichung für allgemeine Operatoren von Laplace-Typ, die auf Schnitten in Vektorbündeln wirken. Wir zeigen auch ähnliche Resultate für den Wärmekern, wobei wir uns allerdings auf Metriken einschränken müssen, die eine gewisse Glattheitsbedingung am Rand erfüllen.
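Schematically, and only as an illustration with normalization conventions suppressed, the time-slicing definition recalled in this abstract reads $ e^{-t\Delta}(x,y) = \lim_{N \to \infty} \int_{M^{N-1}} \prod_{i=1}^{N} q_{t/N}(x_{i-1}, x_i)\, dx_1 \cdots dx_{N-1} $ with $ x_0 = x $ and $ x_N = y $, where $ q_s(x,y) \approx (4\pi s)^{-n/2} e^{-d(x,y)^2/4s} $ is the Gaussian weight attached to the geodesic segment from $ x $ to $ y $; this schematic form is an assumption for orientation, not a formula quoted from the thesis.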
Eine der wichtigsten Manipulationen, die man an Pfadintegralen vornehmen möchte, ist das Bilden ihrer asymptotischen Entwicklungen; im Falle des Wärmekerns ist dies die Kurzzeitasymptotik. Um die endlich-dimensionale Approximation hier nutzen zu können, ist es nötig, dass die Approximation uniform im Zeitparameter ist. Dies kann in der Tat erreicht werden; zu diesem Zweck geben wir starke Fehlerabschätzungen an. Schließlich wenden wir diese Resultate an, um die Kurzzeitasymptotik des Wärmekerns (auch im degenerierten Fall, d.h. am Schnittort) herzuleiten. Unsere Resultate machen es außerdem möglich, die asymptotische Entwicklung des Wärmekerns mit einer formalen asymptotischen Entwicklung der unendlichdimensionalen Pfadintegrale in Verbindung zu bringen. Auf diese Weise erhält man Beziehungen zwischen geometrischen Größen der zugrundeliegenden Mannigfaltigkeit und solchen des Pfadraumes. Insbesondere zeigen wir, dass der Term niedrigster Ordnung in der asymptotischen Entwicklung des Wärmekerns im Wesentlichen durch die Fredholm-Determinante der Hesseschen des Energie-Funktionals gegeben ist. Weiterhin untersuchen wir die Verbindung zur zeta-regularisierten Determinante des Jacobi-Operators entlang von minimierenden Geodätischen. KW - heat equation KW - heat kernel KW - path integral KW - determinant KW - asymptotic expansion KW - Laplace expansion KW - heat asymptotics KW - Wiener measure KW - Wärmeleitungsgleichung KW - Wärmekern KW - Pfadintegrale KW - asymptotische Entwicklung KW - Determinante Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-94387 ER - TY - JOUR A1 - Benini, Marco T1 - Optimal space of linear classical observables for Maxwell k-forms via spacelike and timelike compact de Rham cohomologies JF - Journal of mathematical physics N2 - Being motivated by open questions in gauge field theories, we consider non-standard de Rham cohomology groups for timelike compact and spacelike compact support systems. These cohomology groups are shown to be isomorphic respectively to the usual de Rham cohomology of a spacelike Cauchy surface and its counterpart with compact support. Furthermore, an analog of the usual Poincaré duality for de Rham cohomology is shown to hold for the case with non-standard supports as well. We apply these results to find optimal spaces of linear observables for analogs of arbitrary degree k of both the vector potential and the Faraday tensor. The term optimal has to be intended in the following sense: The spaces of linear observables we consider distinguish between different configurations; in addition to that, there are no redundant observables. This last point in particular heavily relies on the analog of Poincaré duality for the new cohomology groups. Y1 - 2016 U6 - https://doi.org/10.1063/1.4947563 SN - 0022-2488 SN - 1089-7658 VL - 57 SP - 1249 EP - 1279 PB - American Institute of Physics CY - Melville ER - TY - INPR A1 - Blanchard, Gilles A1 - Mücke, Nicole T1 - Optimal rates for regularization of statistical inverse learning problems N2 - We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with an additional noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems.
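In symbols, the observation model just described is $ Y_i = (A f)(X_i) + \varepsilon_i $, $ i = 1, \dots, n $, with design points $ X_i $ drawn i.i.d. from an unknown distribution and centered noise $ \varepsilon_i $; the direct problem estimates $ Af $, the inverse problem estimates $ f $. The notation is assumed here for illustration.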
In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and on the radius of the source condition set. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 5 KW - statistical inverse problem KW - minimax rate KW - kernel method Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-89782 SN - 2193-6943 VL - 5 IS - 5 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - THES A1 - Lyu, Xiaojing T1 - Operators on singular manifolds T1 - Operatoren auf singulären Mannigfaltigkeiten N2 - We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus. N2 - Wir studieren den Zusammenhang zwischen Analysis auf Mannigfaltigkeiten mit Singularitäten und komplexer Analysis und entwickeln neue Strukturen von Operatoren basierend auf der Mellin-Transformation und Hilfsmitteln für die Iteration des Kalküls für höhere Singularitäten. Wir beziehen uns auf die Idee von der Interpretation von Randwert-Problemen (BVPs) durch Pseudo-Differentialoperatoren und Hauptsymbol-Hierarchien, unter Berücksichtigung der Tatsache, dass BVPs eine Quelle von Konus- und Kanten-Operatoralgebren sind. Die betreffenden Konus- und Kanten-Pseudo-differentiellen Algebren sind wiederum der Startpunkt von höheren Eckentheorien. Zusätzlich bestehen tiefe Beziehungen zwischen Ecken-Operatoren und komplexer Analysis. Dies wird illustriert durch den Mellin-Symbol-Kalkül. KW - order filtration KW - Mellin-Symbols KW - singular manifolds KW - Ordnungs-Filtrierung KW - Mellin-Symbole KW - singuläre Mannigfaltigkeiten Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-103643 ER - TY - THES A1 - Mazzonetto, Sara T1 - On the exact simulation of (skew) Brownian diffusions with discontinuous drift T1 - Über die exakte Simulation (skew) Brownscher Diffusionen mit unstetiger Drift T1 - Simulation exacte de diffusions browniennes (biaisées) avec dérive discontinue N2 - This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps. The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion, flipping its excursions from the origin with a given probability.
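For orientation, the skew Brownian motion with permeability parameter $ \alpha \in (0,1) $ can be characterized (following Harrison and Shepp) as the unique strong solution of $ X_t = x + W_t + (2\alpha - 1) L_t^{0}(X) $, where $ L^{0}(X) $ is the symmetric local time of $ X $ at 0; multi-skew diffusions carry one such weighted local-time term per barrier. This standard background equation is added for orientation and is not quoted from the thesis.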
Such a process behaves as the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, is a diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Clearly, a multi-skew diffusion can be characterized either as a solution of a stochastic differential equation involving weighted local times (these terms providing the semi-permeability) or by its infinitesimal generator as a Markov process. In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex properties. Thanks to this representation we write explicitly the transition densities of the two-skew Brownian motion with constant drift as an infinite series involving, in particular, Gaussian functions and their tails. Then we propose a new useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental densities is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one finally samples directly from the law without any approximation, except for machine precision. As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm. The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the case of a continuous drift (resp. of a drift with one finite jump) has been treated. The theoretical method we give allows us to deal with any finite number of discontinuities. Then we focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed. N2 - In dieser Dissertation wird die exakte Simulation zweier Klassen reeller Brownscher Diffusionen untersucht: die multi-skew Brownsche Bewegung mit konstanter Drift sowie Brownsche Diffusionen mit einer Drift mit endlich vielen Sprüngen. Die skew Brownsche Bewegung wurde in den sechziger Jahren von Itô und McKean als eine Brownsche Bewegung eingeführt, für die die Richtung ihrer Exkursionen am Ursprung zufällig mit einer gegebenen Wahrscheinlichkeit ausgewürfelt wird. Solche asymmetrischen Prozesse verhalten sich im Wesentlichen wie der Originalprozess außer bei 0, das sich wie eine semipermeable Barriere verhält. Allgemeiner sind skew Diffusionsprozesse mit mehreren semipermeablen Barrieren, auch multi-skew Diffusionen genannt, Diffusionsprozesse mit Ausnahme an den Barrieren, wo sie jeweils teilweise reflektiert werden.
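A minimal Python sketch of the classical rejection sampling step recalled in this abstract follows (the generalized version for densities known only approximately is the thesis's contribution and is not reproduced here); the densities and constants below are illustrative assumptions.

import math
import random

def rejection_sample(f, g, sample_g, c):
    # Return one exact sample from the (possibly unnormalized) target density f,
    # given an instrumental density g with f(x) <= c * g(x) for all x.
    while True:
        x = sample_g()
        if random.random() * c * g(x) <= f(x):
            return x

def g(x):
    # Instrumental density: standard normal, easy to sample from.
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def f(x):
    # Unnormalized target; satisfies f <= 2 * g since 1 + cos(x) <= 2.
    return g(x) * (1.0 + math.cos(x))

samples = [rejection_sample(f, g, lambda: random.gauss(0.0, 1.0), c=2.0)
           for _ in range(1000)]
print(sum(samples) / len(samples))  # near 0: the target is symmetric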
Natürlich ist eine multi-skew Diffusion durch eine stochastische Differentialgleichung mit Lokalzeiten (diese bewirken die Semipermeabilität) oder durch ihren infinitesimalen Generator als Markov-Prozess charakterisiert. In dieser Arbeit leiten wir zunächst eine Konturintegraldarstellung der Übergangshalbgruppe der multi-skew Brownschen Bewegung mit konstanter Drift durch eine feine Analyse ihrer komplexen Eigenschaften her. Dank dieser Darstellung wird eine explizite Darstellung der Übergangswahrscheinlichkeiten der zweifach-skew Brownschen Bewegung mit konstanter Drift als eine unendliche Reihe Gaußscher Dichten erhalten. Anschließend wird eine nützliche Verallgemeinerung der bekannten Verwerfungsmethode vorgestellt. Dieses grundlegende Verfahren ermöglicht Realisierungen von Zufallsvariablen, sobald man eine leicht zu simulierende Zufallsvariable derart findet, dass der Quotient der Dichten beider Zufallsvariablen beschränkt ist. Die verallgemeinerte Verwerfungsmethode erlaubt eine exakte Simulation für Dichten, die nur approximiert werden können. Die Originalität unseres Verfahrens liegt nun darin, dass wir, abgesehen von der rechnerbedingten Approximation, exakt von der Verteilung ohne Approximation simulieren. In einer Anwendung simulieren wir die zweifach-skew Brownsche Bewegung mit oder ohne konstante Drift. Die Ausgangsdichte ist dabei die der Brownschen Bewegung mit konstanter Drift, und wir geben gleichmäßige Schranken des Quotienten der Dichten an. Dazu werden numerische Simulationen gezeigt, um die Leistungsfähigkeit des Verfahrens zu demonstrieren. Das zweite Ziel dieser Arbeit ist die Entwicklung eines exakten Simulationsverfahrens für Brownsche Diffusionen, deren Drift mehrere Sprünge hat. In der Literatur wurden bisher nur Diffusionen mit stetiger Drift bzw. mit einer Drift mit höchstens einem Sprung behandelt. Unser Verfahren erlaubt den Umgang mit jeder endlichen Anzahl von Sprüngen. Insbesondere wird der Fall zweier Sprünge behandelt, da unser Simulationsverfahren mit den bereits erhaltenen Übergangswahrscheinlichkeiten der zweifach-skew Brownschen Bewegung verwandt ist. An mehreren Beispielen demonstrieren wir die Effizienz unseres Ansatzes. KW - exact simulation KW - exakte Simulation KW - skew diffusions KW - Skew Diffusionen KW - local time KW - discontinuous drift KW - diskontinuierliche Drift Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-102399 ER - TY - JOUR A1 - Keller, Matthias A1 - Lenz, Daniel A1 - Münch, Florentin A1 - Schmidt, Marcel A1 - Telcs, Andras T1 - Note on short-time behavior of semigroups associated to self-adjoint operators JF - Bulletin of the London Mathematical Society N2 - We present a simple observation showing that the heat kernel on a locally finite graph behaves for short times t roughly like t^d, where d is the combinatorial distance. This is very different from the classical Varadhan-type behavior on manifolds. Moreover, this also gives that short-time behavior and global behavior of the heat kernel are governed by two different metrics whenever the degree of the graph is not uniformly bounded. Y1 - 2016 U6 - https://doi.org/10.1112/blms/bdw054 SN - 0024-6093 SN - 1469-2120 VL - 48 SP - 935 EP - 944 PB - Oxford Univ. Press CY - Oxford ER - TY - JOUR A1 - Gregory, A. A1 - Cotter, C. J.
A1 - Reich, Sebastian T1 - MULTILEVEL ENSEMBLE TRANSFORM PARTICLE FILTERING JF - SIAM journal on scientific computing N2 - This paper extends the multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, multilevel Monte Carlo is applied to a certain variant of the particle filter, the ensemble transform particle filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles. KW - multilevel Monte Carlo KW - sequential data assimilation KW - optimal transport Y1 - 2016 U6 - https://doi.org/10.1137/15M1038232 SN - 1064-8275 SN - 1095-7197 VL - 38 SP - A1317 EP - A1338 PB - Society for Industrial and Applied Mathematics CY - Philadelphia ER - TY - JOUR A1 - Pornsawad, Pornsarp A1 - Böckmann, Christine T1 - Modified Iterative Runge-Kutta-Type Methods for Nonlinear Ill-Posed Problems JF - Numerical functional analysis and optimization : an international journal of rapid publication N2 - This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. The convergence rate results of the proposed method can be obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are achieved by using the Levenberg-Marquardt, Lobatto, and Radau methods. KW - Nonlinear ill-posed problems KW - Runge-Kutta methods KW - regularization methods KW - Hölder-type source condition KW - stopping rules Y1 - 2016 U6 - https://doi.org/10.1080/01630563.2016.1219744 SN - 0163-0563 SN - 1532-2467 VL - 37 SP - 1562 EP - 1589 PB - Wiley-VCH CY - Philadelphia ER - TY - THES A1 - Wichitsa-nguan, Korakot T1 - Modifications and extensions of the logistic regression and Cox model T1 - Modifikationen und Erweiterungen des logistischen Regressionsmodells und des Cox-Modells N2 - In many statistical applications, the aim is to model the relationship between covariates and some outcomes. A choice of the appropriate model depends on the outcome and the research objectives, such as linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, the logistic regression is widely used to describe the relationship between a response variable as binary outcome and explanatory variables as a set of covariates. However, epidemiologic cohort studies are quite expensive regarding data management since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce cost and time for data collection. The case-cohort sampling collects a small random sample from the entire cohort, which is called the subcohort. The advantage of this design is that the covariate and follow-up data are recorded only on the subcohort and all cases (all members of the cohort who develop the event of interest during the follow-up process). In this thesis, we investigate the estimation in the logistic model for case-cohort design.
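For reference, the logistic regression model investigated here links a binary response $ Y $ to a covariate vector $ x $ via $ \mathrm{P}(Y = 1 \mid X = x) = \exp(\alpha + \beta^{T} x) / (1 + \exp(\alpha + \beta^{T} x)) $, so that $ \beta $ collects the log odds ratios; under the case-cohort design this model is fitted using only the cases and the subcohort. The displayed parametrization is the textbook one, assumed here for illustration.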
First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator. Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure of the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, the comparison between theoretical values and simulation results of the asymptotic variance of the estimator is presented. Clearly, logistic regression is sufficient when the binary outcome is available for all subjects and refers to a fixed time interval. Nevertheless, in practice, the observations in clinical trials are frequently collected for different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection or when the event of interest has not occurred for an individual by the end of the study. These observations are called censored observations. Survival analysis is necessary to solve these problems. Moreover, the time to the occurrence of the event of interest is taken into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed the model, which focuses on the hazard function. The Cox model assumes λ(t|x) = λ0(t) exp(β^T x), where λ0(t) is an unspecified baseline hazard at time t, X is the vector of covariates, and β is a p-dimensional vector of coefficients. In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix I_n(β) and extend results for the Cox model of Andersen and Gill (1982). In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the maximum partial likelihood estimator (MPLE) of β0 in the Cox model, and some properties of the asymptotic variance matrix of the MPLE are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and shown in examples. In a sensitivity analysis, the efficiency of given covariates is calculated; the efficiencies are then determined for neighborhoods of the exponential models. It appears that, for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions.
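For reference, the partial likelihood behind the MPLE (standard since Cox 1972; distinct event times $ t_1 < \dots < t_k $ are assumed here for simplicity) is $ L(\beta) = \prod_{i=1}^{k} \exp(\beta^{T} x_{(i)}) \big/ \sum_{j \in R(t_i)} \exp(\beta^{T} x_j) $, where $ x_{(i)} $ is the covariate vector of the subject failing at $ t_i $ and $ R(t_i) $ is the risk set just before $ t_i $; the baseline hazard $ \lambda_0 $ cancels, which is what makes estimation of $ \beta $ feasible without specifying it.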
Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed. Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a Chi-squared distribution. Moreover, the limit statement remains true when replacing the unknown ϑ0 by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting Chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a special alternative. It gives quite good results for the chosen underlying model. References P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982. D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972. R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986. N2 - In vielen statistischen Anwendungen besteht die Aufgabe darin, die Beziehung zwischen Einflussgrößen und einer Zielgröße zu modellieren. Die Wahl eines geeigneten Modells hängt vom Typ der Zielgröße und vom Ziel der Untersuchung ab - während lineare Modelle für die Beschreibung des Zusammenhanges stetiger Outputs und Einflussgrößen genutzt werden, dienen logistische Regressionsmodelle zur Modellierung binärer Zielgrößen und das Cox-Modell zur Modellierung von Lebensdauer-Daten. In epidemiologischen, medizinischen, biologischen, sozialen und ökonomischen Studien wird oftmals die logistische Regression angewendet, um den Zusammenhang zwischen einer binären Zielgröße und den erklärenden Variablen, den Kovariaten, zu modellieren. In epidemiologischen Studien muss häufig eine große Anzahl von Individuen für eine lange Zeit beobachtet werden. Um hierbei Kosten zu reduzieren, wird ein "Case-Cohort-Design" angewendet. Hierbei werden die Einflussgrößen nur für die Individuen erfasst, für die das interessierende Ereignis eintritt, und für eine zufällig gewählte kleine Teilmenge von Individuen, die Subkohorte. In der vorliegenden Arbeit wird das Schätzen im logistischen Regressionsmodell unter Case-Cohort-Design betrachtet. Für den Fall, dass auch die Kovariate binär ist, wurde bereits von Prentice (1986) die asymptotische Normalität des Maximum-Likelihood-Schätzers für den Logarithmus des "odds ratio", einen Parameter, der den Effekt der Kovariate charakterisiert, angegeben. In dieser Arbeit wird über einen Maximum-Likelihood-Zugang ein Schätzer für die Varianz der Grenzverteilung hergeleitet, für den durch empirische Untersuchungen gezeigt wird, dass er dem von Prentice überlegen ist.
Ausgehend von dem binären Kovariaten-Modell werden Maximum-Likelihood-Schätzer für logistische Regressionsmodelle mit diskreten Kovariaten unter Case-Cohort-Design hergeleitet. Die asymptotische Normalität wird gezeigt; darauf aufbauend können Formeln für die Standardfehler angegeben werden. Simulationsstudien ergänzen diesen Abschnitt. Sie zeigen den Einfluss des Umfanges der Subkohorte auf die Varianz der Schätzer. Logistische Regression ist geeignet, wenn man das interessierende Ereignis für alle Individuen beobachten kann und wenn man ein festes Zeitintervall betrachtet. Will man die Zeit bis zum Eintreten eines Ereignisses bei der Untersuchung der Wirkung der Kovariate berücksichtigen, so sind Lebensdauermodelle angemessen. Hierbei können auch zensierte Daten behandelt werden. Ein sehr häufig angewendetes Regressionsmodell ist das von Cox (1972) vorgeschlagene, bei dem die Hazardrate durch λ(t|x) = λ0(t) exp(β^T x) definiert ist. Hierbei ist λ0(t) eine unspezifizierte Baseline-Hazardrate, x ist ein Kovariaten-Vektor und β ist ein p-dimensionaler Koeffizientenvektor. Nachdem ein Überblick über das Schätzen und Testen im Cox-Modell und seinen Erweiterungen gegeben wird, werden Aussagen zur Schätzbarkeit des Parameters β durch die "partial-likelihood"-Methode hergeleitet. Grundlage hierzu sind neue Darstellungen der beobachteten Fisher-Information, die die Ergebnisse von Andersen and Gill (1982) erweitern. Unter Regularitätsbedingungen ist der Schätzer asymptotisch normal; die Inverse der Grenzmatrix der Fisher-Information ist die Varianzmatrix der Grenzverteilung. Bedingungen für die Nichtsingularität dieser Grenzmatrix führen zum Begriff der asymptotischen Schätzbarkeit, der in der vorliegenden Arbeit ausführlich untersucht wird. Darüber hinaus ist diese Matrix Grundlage für die Herleitung lokal optimaler Kovariate. In einer Sensitivitätsanalyse wird die Effizienz gewählter Kovariate berechnet. Die Berechnungen zeigen, dass die Baseline-Verteilung nur wenig Einfluss auf die Effizienz hat. Entscheidend ist die Wahl der Kovariate. Es werden einige Vorschläge für anwendbare optimale Kovariate und Berechnungsverfahren für das Auffinden optimaler Kovariate diskutiert. Eine Erweiterung des Cox-Modells besteht darin, zeitabhängige Koeffizienten zuzulassen. Da diese Koeffizientenfunktionen nicht näher spezifiziert sind, werden sie nichtparametrisch geschätzt. Eine mögliche Methode ist die "local-linear-partial-likelihood"-Methode, deren Eigenschaften beispielsweise in der Arbeit von Cai and Sun (2003) untersucht wurden. In der vorliegenden Arbeit werden Simulationen zu dieser Methode durchgeführt. Hauptaspekt ist das Testen der Koeffizientenfunktion. Getestet wird, ob diese Funktion eine bestimmte parametrische Form besitzt. Betrachtet wird der Score-Vektor, der von der "local-constant-partial-likelihood"-Funktion abgeleitet wird. Ausgehend von der asymptotischen Normalität dieses Vektors an verschiedenen Gitterpunkten kann gezeigt werden, dass die Verteilung der geeignet standardisierten quadratischen Form unter der Nullhypothese gegen eine Chi-Quadrat-Verteilung konvergiert. Die Eigenschaften des auf dieser Grenzverteilungsaussage aufbauenden Tests hängen nicht nur vom Stichprobenumfang, sondern auch vom verwendeten Glättungsparameter ab. Deshalb ist es sinnvoll, auch einen Bootstrap-Test zu betrachten. In der vorliegenden Arbeit wird ein Bootstrap-Test zum Testen der Hypothese, dass die Koeffizienten-Funktion konstant ist, d.h. dass das klassische Cox-Modell vorliegt, vorgeschlagen.
Der Algorithmus wird angegeben. Simulationen zum Verhalten dieses Tests unter der Nullhypothese und einer speziellen Alternative werden durchgeführt. Literatur P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982. Z. Cai and Y. Sun. Local linear estimation for time-dependent coefficients in Cox's regression models. Scand. J. Statist., 30(1):93-111, 2003. D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972. R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986. KW - survival analysis KW - Cox model KW - logistic regression analysis KW - logistische Regression KW - Case-Cohort-Design KW - Cox-Modell Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-90033 ER - TY - JOUR A1 - Kistner, Saskia A1 - Vollmeyer, Regina A1 - Burns, Bruce D. A1 - Kortenkamp, Ulrich T1 - Model development in scientific discovery learning with a computer-based physics task JF - Computers in human behavior N2 - Based on theories of scientific discovery learning (SDL) and conceptual change, this study explores students' preconceptions in the domain of torques in physics and the development of these conceptions while learning with a computer-based SDL task. As a framework we used a three-space theory of SDL and focused on model space, which is supposed to contain the current conceptualization/model of the learning domain, and on its change through hypothesis testing and experimenting. Three questions were addressed: (1) What are students' preconceptions of torques before learning about this domain? To address this, a multiple-choice test for assessing students' models of torques was developed and given to secondary school students (N = 47) who learned about torques using computer simulations. (2) How do students' models of torques develop during SDL? Working with simulations led to the replacement of some misconceptions with physically correct conceptions. (3) Are there differential patterns of model development and, if so, how do they relate to students' use of the simulations? By analyzing individual differences in model development, we found that an intensive use of the simulations was associated with the acquisition of correct conceptions. Thus, the three-space theory provided a useful framework for understanding conceptual change in SDL. KW - Scientific discovery learning KW - Multiple problem spaces KW - Computer simulations KW - Physics concepts KW - Misconceptions KW - Conceptual change Y1 - 2016 U6 - https://doi.org/10.1016/j.chb.2016.02.041 SN - 0747-5632 SN - 1873-7692 VL - 59 SP - 446 EP - 455 PB - Elsevier CY - Oxford ER - TY - THES A1 - Samaras, Stefanos T1 - Microphysical retrieval of non-spherical aerosol particles using regularized inversion of multi-wavelength lidar data T1 - Retrieval der Mikrophysik von nichtkugelförmigen Aerosolpartikeln durch regularisierte Inversion von Mehrwellenlängen-Lidardaten N2 - Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the largest sources of uncertainty in climate projections. Earth's radiation balance is altered by aerosols depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index, and the single scattering albedo are commonly used properties of this kind, linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). While this model offers a reasonable treatment for spherically approximated particles, it does not provide a viable description for other naturally occurring, arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation, introducing the notion of a two-dimensional (2D) shape-size distribution. Inverting only a few optical data points to retrieve the shape-size distribution is a non-linear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods, and quantifies the ill-posedness. The latter improves our understanding of the main cause of instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, which are associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is computationally so costly that it would render a retrieval analysis impractical. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations, which are likewise time-consuming. We overcome these difficulties by using precalculated databases and sophisticated retrieval software (SphInX: Spheroidal Inversion eXperiments) developed especially for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs. Hybrid regularization in conjunction with minimization processes is used as a basis for our algorithms. Synthetic data retrievals are performed for various simulated atmospheric scenarios in order to test the efficiency of different regularization methods. A major concern here is the gap in the contemporary literature regarding full sets of uncertainties for a wide variety of numerical instances. To this end, the most appropriate methods are identified through a thorough analysis of their overall behavior regarding accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested in various measurement cases, giving further insight for future algorithm improvements. N2 - Zahlreiche Berichte von relativ schnellen Klimaveränderungen im vergangenen Jahrhundert liefern überzeugende Argumente über die Auswirkungen von Aerosolen und Wolken auf Wetter und Klima. Aerosole und Wolken wurden als Quellen größter Unsicherheit in Klimaprognosen identifiziert.
Die Strahlungsbilanz der Erde wird durch die Größe, die Morphologie und die chemische Zusammensetzung der Aerosolpartikeln verändert. Konkurrierende Effekte in der Atmosphäre können durch die Bestimmung von mikrophysikalischen Partikeleigenschaften weiter untersucht werden, was der Fokus der vorliegenden Arbeit ist. Die Aerosolgrößenverteilung, der Brechungsindex der Partikeln und die Einzel-Streu-Albedo sind solche häufig verwendeten Parameter, die mit dem Aerosoltyp und dem Strahlungsantrieb verbunden sind. Hoch entwickelte Lidare (Light Detection and Ranging) haben die Aerosolüberwachung und die optische Profilierung zu einem Routineprozess gemacht. Lidar-Daten wurden verwendet, um die Größenverteilung zu bestimmen, was durch die Inversion des sogenannten Lorenz-Mie-Modells (LMM) gelingt. Dieses Modell bietet eine angemessene Behandlung für sphärisch angenäherte Partikeln; es stellt aber keine brauchbare Beschreibung für andere natürlich auftretende, beliebig geformte Partikeln (wie z.B. Staubpartikeln) bereit. Andererseits stellt die Einbeziehung einer nicht kugelförmigen Geometrie (wie z.B. einfache Sphäroide) bestimmte optische Eigenschaften mit verbesserter Genauigkeit dar. Angesichts dieser Tatsache erweitern wir das LMM durch die Approximation von Sphäroid-Partikeln. Dazu ist es notwendig, den Begriff einer zweidimensionalen Größenverteilung einzuführen. Die Inversion einer sehr geringen Anzahl optischer Datenpunkte zur Bestimmung der Form der Größenverteilung ist als ein nichtlineares schlecht gestelltes Problem bekannt. Eine kurze mathematische Analyse wird vorgestellt, die die inhärente Tendenz zu stark oszillierenden Lösungen zeigt. Weiterhin werden Optionen für eine verallgemeinerte Lösung durch Regularisierungsmethoden untersucht und der Grad der Schlechtgestelltheit quantifiziert. Letzteres wird unser Verständnis für die Hauptursache der Instabilität bei den berechneten Lösungsräumen verbessern. Der neue Ansatz ermöglicht es uns, zusätzliche Lidar-Datenpunkte aus Depolarisationsmessungen zu nutzen, die mit der Nichtkugelförmigkeit der Partikeln assoziiert sind. Die Verallgemeinerung des LMMs erhöht erheblich die Komplexität des Problems. Die zugrundeliegende Theorie für die Berechnung der beteiligten optischen Querschnitte (T-Matrix-Ansatz) ist rechnerisch so aufwendig, dass eine Neuberechnung dieser nicht sinnvoll erscheint. Darüber hinaus wird ein zweidimensionales Kollokationsverfahren für die Diskretisierung der Modellgleichung vorgeschlagen. Dieses Verfahren beinhaltet Doppelintegrationen, die wiederum zeitaufwendig sind. Wir überwinden diese Schwierigkeiten durch Verwendung vorgerechneter Datenbanken sowie einer hochentwickelten Retrieval-Software (SphInX: Spheroidal Inversion eXperiments). Diese Software wurde speziell für unseren Zweck entwickelt und ist in der Lage, mehrere Datensatzinversionen gleichzeitig durchzuführen und eine große Auswahl von mikrophysikalischen Retrieval-Ausgaben bereitzustellen. Eine hybride Regularisierung in Verbindung mit einem Minimierungsverfahren wird als Grundlage für unsere Algorithmen verwendet. Synthetische Daten-Inversionen werden mit verschiedenen atmosphärischen Szenarien durchgeführt, um die Effizienz verschiedener Regularisierungsmethoden zu untersuchen. Die Lücke in der gegenwärtigen wissenschaftlichen Literatur, vollständige Unsicherheitsangaben für breitgefächerte numerische Fälle bereitzustellen, ist ein Hauptanliegen dieser Arbeit.
Motiviert davon werden die am besten geeigneten Verfahren einer gründlichen Analyse in Bezug auf ihr Gesamtverhalten, d.h. Genauigkeit und Stabilität, unterzogen. Der allgemeine Trend der Anfangsgrößenverteilung wird in unseren numerischen Experimenten erfasst. Zusätzlich hängt die Rekonstruktionsqualität vom Datenfehler ab. Darüber hinaus wird die Anzahl der notwendigen Depolarisationspunkte zum ersten Mal aus der Sicht des mikrophysikalischen Parameter-Retrievals erforscht. Abschließend verwenden wir unsere Software für verschiedene Messfälle, was weitere Einblicke für künftige Verbesserungen des Algorithmus gibt. KW - microphysics KW - retrieval KW - lidar KW - aerosols KW - regularization KW - ill-posed KW - inversion KW - Mikrophysik KW - Retrieval KW - Lidar KW - Aerosole KW - Regularisierung KW - schlecht gestellt KW - Inversion Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-396528 ER - TY - JOUR A1 - Müller, Detlef A1 - Böckmann, Christine A1 - Kolgotin, Alexei A1 - Schneidenbach, Lars A1 - Chemyakin, Eduard A1 - Rosemann, Julia A1 - Znak, Pavel A1 - Romanov, Anton T1 - Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET JF - Atmospheric measurement techniques : an interactive open access journal of the European Geosciences Union N2 - We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with an accuracy better than 0.005-0.1 or +/- 50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data, we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used to test how well microphysical parameters can be derived if the real part of the complex refractive index is known to within 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint on the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g.
cases in which the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms with respect to their ability to identify aerosol types that have not been measured so far but cannot be ruled out on the basis of our current knowledge of aerosol physics. We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions, which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 μm in our simulations, where the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15% in the simulation studies. We target 50% uncertainty as a reasonable threshold for our data products, though we attempt to obtain data products with less uncertainty in future work. Y1 - 2016 U6 - https://doi.org/10.5194/amt-9-5007-2016 SN - 1867-1381 SN - 1867-8548 VL - 9 SP - 5007 EP - 5035 PB - Copernicus CY - Göttingen ER - TY - JOUR A1 - Lyu, Xiaojing A1 - Schulze, Bert-Wolfgang T1 - Mellin Operators in the Edge Calculus JF - Complex analysis and operator theory N2 - A manifold M with smooth edge Y is locally near Y modelled on X^Δ × Ω for a cone X^Δ := (R̄_+ × X)/({0} × X), where X is a smooth manifold and Ω ⊂ R^q is an open set corresponding to a chart on Y. Compared with pseudo-differential algebras based on other quantizations of edge-degenerate symbols, we extend the approach with Mellin representations on the r half-axis up to r = ∞, the conical exit of X^∧ = R_+ × X ∋ (r, x) at infinity. The alternative description of the edge calculus is useful for pseudo-differential structures on manifolds with higher singularities. KW - Edge degenerate operators KW - Mellin and Green operators, edge symbols Y1 - 2016 U6 - https://doi.org/10.1007/s11785-015-0511-6 SN - 1661-8254 SN - 1661-8262 VL - 10 SP - 965 EP - 1000 PB - Springer CY - Basel ER - TY - THES A1 - Gopalakrishnan, Sathej T1 - Mathematical modelling of host-disease-drug interactions in HIV disease T1 - Mathematische Modellierung von Pathogen-Wirkstoff-Wirt-Interaktionen im Kontext der HIV Erkrankung N2 - The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due to the remarkable evolutionary adaptation that the virus exhibits by the accumulation of drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly against the background of previous treatment failure with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in the assessment of treatment failure with a certain drug combination and in exploring potential salvage treatment regimens. Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs.
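For readers unfamiliar with this class of models, a minimal sketch of the standard target-cell-limited model of viral dynamics with a single drug of efficacy eps may be helpful; it is an illustration only, not the integrated model developed in this thesis, and all parameter values are placeholders:

    # Minimal sketch of the standard target-cell-limited HIV dynamics model
    # with an (assumed) drug efficacy eps in [0, 1]; parameter values are
    # illustrative placeholders, not estimates from the thesis.
    from scipy.integrate import solve_ivp

    lam, d, beta, delta, p, c = 1e4, 0.01, 8e-7, 1.0, 2e3, 23.0
    eps = 0.6  # overall drug efficacy; eps = 0 means no treatment

    def rhs(t, y):
        T, I, V = y  # uninfected target cells, infected cells, free virus
        dT = lam - d * T - (1 - eps) * beta * T * V
        dI = (1 - eps) * beta * T * V - delta * I
        dV = p * I - c * V
        return [dT, dI, dV]

    sol = solve_ivp(rhs, (0.0, 50.0), [1e6, 0.0, 1e-3], rtol=1e-8)
    print(sol.y[2, -1])  # free virus concentration at day 50

Mutation schemes of the kind discussed next can be grafted onto such a system by adding one infected-cell and one virus compartment per genotype, with transition rates between them.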
However, such models typically use simplified and coarse-grained mutation schemes, which curbs the extent of their application to drug-specific clinical mutation data for the assessment of potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate the drug effect mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and also in understanding the impact of drug efficacy on mutation dynamics. In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling approach enables us to study the dynamics of different mutant genotypes and to assess mutational abundance at virological failure. As an application of our model, we estimate in vivo fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action, including molecules that target host factors. Additionally, we address another important aspect of the clinical management of HIV disease, namely drug pharmacokinetics. It is clear that time-dependent changes in in vivo drug concentration could have an impact on the antiviral effect and also influence decisions on dosing intervals. We present a framework that provides an integrated understanding of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression. Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting. N2 - Das Humane Immundefizienz-Virus (HIV) hat fast drei Jahrzehnte lang allen Bemühungen um eine Heilung widerstanden. Eine anhaltende Unterdrückung des Virus ist nach wie vor eine Herausforderung, vor allem aufgrund der bemerkenswerten evolutionären Anpassungsfähigkeit, die das Virus durch die Ansammlung medikamentenresistenter Mutationen in seinem Genom zeigt. Aktuelle therapeutische Strategien zielen auf das Erreichen und die Erhaltung einer niedrigen Viruslast und umfassen in der Regel mehrere Medikamente. Die Wahl der optimalen Kombination dieser Medikamente ist von entscheidender Bedeutung, besonders vor dem Hintergrund eines zuvor eingetretenen Therapieversagens mit bestimmten anderen Medikamenten. Ein Verständnis der Dynamik viraler mutierter Genotypen hilft bei der Bewertung des Therapieversagens unter einer bestimmten Medikamentenkombination und bei der Erkundung potenzieller Folgetherapien. Mathematische Modelle der Virusdynamik haben sich für das Verständnis des viralen Lebenszyklus und der Wirkung antiretroviraler Medikamente als äußerst wertvoll erwiesen.
Allerdings verwenden solche Modelle in der Regel vereinfachte und grobkörnige Mutationsschemata, was den Umfang ihrer Anwendung auf medikamentenspezifische klinische Mutationsdaten zur Beurteilung möglicher Folgetherapien einschränkt. Statistische Modelle der Mutationsanhäufung haben sich bei der Aufklärung von Mechanismen der Resistenzentwicklung bewährt, indem sie Mutationspfade unter verschiedenen Medikamentenumgebungen rekonstruieren. Während diese Modelle Behandlungsergebnisse mittels statistischen Lernens gut vorhersagen, beziehen sie die Medikamentenwirkung nicht mechanistisch ein. Darüber hinaus sind sie aufgrund eines inhärenten Mangels an zeitlichen Merkmalen weniger aussagekräftig in Aspekten wie der Vorhersage der Mutantenhäufigkeit bei Therapieversagen. Dies schränkt ihre Anwendung bei der Analyse der Pharmakologie antiretroviraler Medikamente ein, insbesondere zeitabhängiger Merkmale der HIV-Therapie wie Pharmakokinetik und Pharmakodynamik, und ebenso beim Verständnis des Einflusses der Medikamentenwirksamkeit auf die Mutationsdynamik. In dieser Arbeit entwickeln wir ein integriertes Modell der In-vivo-Virusdynamik, das medikamentenspezifische, aus klinischen Daten gelernte Mutationsschemata einbezieht. Unser kombinierter Modellansatz ermöglicht es uns, die Dynamik verschiedener mutierter Genotypen zu untersuchen und die Mutantenhäufigkeit bei virologischem Versagen zu bewerten. Als Anwendung unseres Modells schätzen wir In-vivo-Fitnessmerkmale viraler Mutanten unter verschiedenen Medikamentenumgebungen. Unser Ansatz lässt sich auch auf natürliche Weise auf Mehrfachtherapien erweitern. Weiterhin zeigen wir die Vielseitigkeit unseres Modells, indem wir darlegen, wie es modifiziert werden kann, um kürzlich aufgeklärte Wirkmechanismen einzubeziehen, einschließlich Molekülen, die Wirtsfaktoren adressieren. Zusätzlich behandeln wir einen weiteren wichtigen Aspekt des klinischen Managements der HIV-Erkrankung, nämlich die Pharmakokinetik der Medikamente. Es ist klar, dass zeitabhängige Änderungen der In-vivo-Wirkstoffkonzentration die antivirale Wirkung beeinflussen und auch Entscheidungen über Dosierungsintervalle prägen können. Wir präsentieren einen Rahmen, der ein integriertes Verständnis der wichtigsten Merkmale von Mehrfachdosierungsschemata einschließlich Kumulationsverhältnissen und Halbwertszeiten bietet, und untersuchen anschließend den Einfluss der Pharmakokinetik auf die Virussuppression. Schließlich ist die Identifizierbarkeit der Parameter in solchen nichtlinearen Modellen der Virusdynamik stets ein Anliegen, und wir untersuchen Methoden, die dieses Problem in unserem Rahmen abmildern. KW - HIV KW - mathematical modelling KW - viral fitness KW - pharmacokinetics KW - parameter estimation KW - HIV Erkrankung KW - Pharmakokinetik KW - Fitness KW - mathematische Modellierung KW - Kombinationstherapie Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-100100 ER - TY - BOOK A1 - Pikovskij, Arkadij A1 - Politi, Antonio T1 - Lyapunov Exponents BT - a tool to explore complex dynamics N2 - Lyapunov exponents lie at the heart of chaos theory, and are widely used in studies of complex dynamics. Utilising a pragmatic, physical approach, this self-contained book provides a comprehensive description of the concept. Beginning with the basic properties and numerical methods, it then guides readers through to the most recent advances in applications to complex systems.
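By way of illustration of the numerical methods mentioned here, the largest Lyapunov exponent of a one-dimensional map can be estimated as the orbit average of log|f'(x)|; the logistic map below is an illustrative choice, not an excerpt from the book:

    # Largest Lyapunov exponent of the logistic map x -> r*x*(1 - x),
    # estimated as the time average of log|f'(x)| along an orbit.
    # For r = 4 the exact value is ln 2 ~ 0.6931.
    import math

    r, x = 4.0, 0.2
    for _ in range(1000):            # discard the transient
        x = r * x * (1 - x)

    n, acc = 100_000, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)|
        x = r * x * (1 - x)
    print(acc / n)

For higher-dimensional systems the same idea is implemented by evolving tangent vectors and renormalising them periodically, the classical Benettin scheme.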
Practical algorithms are thoroughly reviewed and their performance is discussed, while a broad set of examples illustrates the wide range of potential applications. The description of various numerical and analytical techniques for the computation of Lyapunov exponents offers an extensive array of tools for the characterization of phenomena such as synchronization, weak and global chaos in low- and high-dimensional set-ups, and localization. This text equips readers with all the investigative expertise needed to fully explore the dynamical properties of complex systems, making it ideal for both graduate students and experienced researchers. Y1 - 2016 SN - 978-1-107-03042-8 PB - Cambridge University Press CY - Cambridge ER - TY - JOUR A1 - Cattiaux, Patrick A1 - Fradon, Myriam A1 - Kulik, Alexei M. A1 - Roelly, Sylvie T1 - Long time behavior of stochastic hard ball systems JF - Bernoulli : official journal of the Bernoulli Society for Mathematical Statistics and Probability N2 - We study the long time behavior of a system of n = 2, 3 Brownian hard balls, living in R^d for d >= 2, submitted to a mutual attraction and to elastic collisions. KW - hard core interaction KW - local time KW - Lyapunov function KW - normal reflection KW - Poincare inequality KW - reversible measure KW - stochastic differential equations Y1 - 2016 U6 - https://doi.org/10.3150/14-BEJ672 SN - 1350-7265 SN - 1573-9759 VL - 22 SP - 681 EP - 710 PB - International Statistical Institute CY - Voorburg ER - TY - JOUR A1 - Kortenkamp, Ulrich A1 - Monaghan, John A1 - Trouche, Luc T1 - Jonathan M Borwein (1951-2016): exploring, experiencing and experimenting in mathematics - an inspiring journey in mathematics JF - Educational studies in mathematics : an international journal Y1 - 2016 U6 - https://doi.org/10.1007/s10649-016-9729-0 SN - 0013-1954 SN - 1573-0816 VL - 93 SP - 131 EP - 136 PB - Springer CY - Dordrecht ER - TY - JOUR A1 - Eichmair, Michael A1 - Metzger, Jan T1 - JENKINS-SERRIN-TYPE RESULTS FOR THE JANG EQUATION JF - Journal of differential geometry N2 - Let (M, g, k) be an initial data set for the Einstein equations of general relativity. We show that a canonical solution of the Jang equation exists in the complement of the union of all weakly future outer trapped regions in the initial data set with respect to a given end, provided that this complement contains no weakly past outer trapped regions. The graph of this solution relates the area of the horizon to the global geometry of the initial data set in a non-trivial way. We prove the existence of a Scherk-type solution of the Jang equation outside the union of all weakly future or past outer trapped regions in the initial data set. This result is a natural exterior analogue for the Jang equation of the classical Jenkins-Serrin theory. We extend and complement existence theorems [19, 20, 40, 29, 18, 31, 11] for Scherk-type constant mean curvature graphs over polygonal domains in (M, g), where (M, g) is a complete Riemannian surface. We can dispense with the a priori assumptions that a subsolution exists and that (M, g) has particular symmetries. Also, our method generalizes to higher dimensions.
Y1 - 2016 U6 - https://doi.org/10.4310/jdg/1453910454 SN - 0022-040X SN - 1945-743X VL - 102 SP - 207 EP - 242 PB - International Press of Boston CY - Somerville ER - TY - JOUR A1 - Keller, Matthias A1 - Münch, Florentin A1 - Pogorzelski, Felix T1 - Geometry and spectrum of rapidly branching graphs JF - Mathematische Nachrichten N2 - We study graphs whose vertex degree tends to infinity and which are, therefore, called rapidly branching. We prove spectral estimates, discreteness of spectrum, first-order eigenvalue and Weyl asymptotics solely in terms of the vertex degree growth. The underlying techniques are estimates on the isoperimetric constant. Furthermore, we give lower volume growth bounds and we provide a new criterion for stochastic incompleteness. KW - Graph Laplacians KW - discrete spectrum KW - eigenvalue asymptotics KW - isoperimetric estimates KW - stochastic completeness Y1 - 2016 U6 - https://doi.org/10.1002/mana.201400349 SN - 0025-584X SN - 1522-2616 VL - 289 SP - 1636 EP - 1647 PB - Wiley-VCH CY - Weinheim ER - TY - JOUR A1 - Tinpun, Kittisak A1 - Koppitz, Jörg T1 - Generating sets of infinite full transformation semigroups with restricted range JF - Acta scientiarum mathematicarum N2 - In the present paper, we consider minimal generating sets of infinite full transformation semigroups with restricted range modulo specific subsets. In particular, we determine relative ranks. KW - generating sets KW - transformation semigroups KW - restricted range KW - relative ranks Y1 - 2016 U6 - https://doi.org/10.14232/actasm-015-502-4 SN - 0001-6969 VL - 82 SP - 55 EP - 63 PB - Institutum Bolyaianum Universitatis Szegediensis CY - Szeged ER - TY - JOUR A1 - Keller, Matthias A1 - Mugnolo, Delio T1 - General Cheeger inequalities for p-Laplacians on graphs JF - Nonlinear analysis : theory, methods & applications N2 - We prove Cheeger inequalities for p-Laplacians on finite and infinite weighted graphs. Unlike in previous works, we do not impose boundedness of the vertex degree, nor do we restrict ourselves to the normalized Laplacian and, more generally, we do not impose any boundedness assumption on the geometry. This is achieved by a novel definition of the measure of the boundary which uses the idea of intrinsic metrics. For the non-normalized case, our bounds on the spectral gap of p-Laplacians are already significantly better for finite graphs, and for infinite graphs they yield non-trivial bounds even in the case of unbounded vertex degree. We furthermore give upper bounds in terms of the Cheeger constant and the exponential volume growth of distance balls. KW - Cheeger inequalities KW - Spectral theory of graphs KW - Intrinsic metrics for Dirichlet forms Y1 - 2016 U6 - https://doi.org/10.1016/j.na.2016.07.011 SN - 0362-546X SN - 1873-5215 VL - 147 SP - 80 EP - 95 PB - Elsevier CY - Oxford ER - TY - JOUR A1 - Beinrucker, Andre A1 - Dogan, Urun A1 - Blanchard, Gilles T1 - Extensions of stability selection using subsamples of observations and covariates JF - Statistics and Computing N2 - We introduce extensions of stability selection, a method to stabilise variable selection methods introduced by Meinshausen and Buhlmann (J R Stat Soc 72:417-473, 2010). We propose to apply a base selection method repeatedly to random subsamples of observations and subsets of covariates under scrutiny, and to select covariates based on their selection frequency. We analyse the effects and benefits of these extensions.
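A minimal sketch of this scheme, under illustrative assumptions (a lasso base selector, half-sampling of both observations and covariates, arbitrary constants; none of this is code from the paper), might look as follows:

    # Sketch of stability selection with subsampled observations and
    # covariate subsets: fit a base selector on random half-samples and
    # keep covariates whose selection frequency exceeds a threshold.
    # Base selector, subset sizes and threshold are illustrative choices.
    import numpy as np
    from sklearn.linear_model import Lasso

    def stability_selection(X, y, n_rounds=200, alpha=0.1, threshold=0.6, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        hits, tries = np.zeros(p), np.zeros(p)
        for _ in range(n_rounds):
            rows = rng.choice(n, size=n // 2, replace=False)   # observations
            cols = rng.choice(p, size=p // 2, replace=False)   # covariates
            coef = Lasso(alpha=alpha).fit(X[np.ix_(rows, cols)], y[rows]).coef_
            tries[cols] += 1
            hits[cols[coef != 0]] += 1
        freq = hits / np.maximum(tries, 1)   # empirical selection frequency
        return np.where(freq >= threshold)[0], freq

Covariates are then ranked by their empirical selection frequency, which is the quantity the theoretical results below are concerned with.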
Our analysis generalizes the theoretical results of Meinshausen and Buhlmann (J R Stat Soc 72:417-473, 2010) from the case of half-samples to subsamples of arbitrary size. We study, in a theoretical manner, the effect of taking random covariate subsets using a simplified score model. Finally, we validate these extensions in numerical experiments on both synthetic and real datasets, and compare the obtained results in detail to the original stability selection method. KW - Variable selection KW - Stability selection KW - Subsampling Y1 - 2016 U6 - https://doi.org/10.1007/s11222-015-9589-y SN - 0960-3174 SN - 1573-1375 VL - 26 SP - 1059 EP - 1077 PB - Springer CY - Dordrecht ER - TY - INPR A1 - Dereudre, David A1 - Mazzonetto, Sara A1 - Roelly, Sylvie T1 - Exact simulation of Brownian diffusions with drift admitting jumps N2 - Using an algorithm based on a retrospective rejection sampling scheme, we propose an exact simulation of a Brownian diffusion whose drift admits several jumps. We treat explicitly and extensively the case of two jumps, providing numerical simulations. Our main contribution is to manage the technical difficulty due to the presence of two jumps thanks to a new explicit expression of the transition density of the skew Brownian motion with two semipermeable barriers and a constant drift. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 7 KW - exact simulation method KW - skew Brownian motion KW - skew diffusion KW - Brownian motion with discontinuous drift Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-91049 SN - 2193-6943 VL - 5 IS - 7 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Wichitsa-Nguan, Korakot A1 - Läuter, Henning A1 - Liero, Hannelore T1 - Estimability in Cox models JF - Statistical Papers N2 - Our estimation procedure is maximum partial likelihood; the maximum partial likelihood estimate (MPLE) is the appropriate estimate in the Cox model with a general censoring distribution, covariates and an unknown baseline hazard rate. We find conditions for estimability and asymptotic estimability. The asymptotic variance matrix of the MPLE is represented and its properties are discussed. KW - Cox model KW - Estimability KW - Asymptotic variance of maximum partial likelihood estimate Y1 - 2016 U6 - https://doi.org/10.1007/s00362-016-0755-x SN - 0932-5026 SN - 1613-9798 VL - 57 SP - 1121 EP - 1140 PB - Springer CY - New York ER - TY - THES A1 - Chutsagulprom, Nawinda T1 - Ensemble-based filters dealing with non-Gaussianity and nonlinearity Y1 - 2016 ER - TY - JOUR A1 - Schroeter, M-A A1 - Ritter, M. A1 - Holschneider, Matthias A1 - Sturm, H. T1 - Enhanced DySEM imaging of cantilever motion using artificial structures patterned by focused ion beam techniques JF - Journal of micromechanics and microengineering N2 - We use a dynamic scanning electron microscope (DySEM) to map the spatial distribution of the vibration of a cantilever beam. The DySEM measurements are based on variations of the local secondary electron signal within the imaging electron beam diameter during an oscillation period of the cantilever. For this reason, the surface of a cantilever without topography or material variation does not allow any conclusions about the spatial distribution of vibration, due to a lack of dynamic contrast. In order to overcome this limitation, artificial structures were added at defined positions on the cantilever surface using focused ion beam lithography patterning.
The DySEM signal of such high-contrast structures is strongly improved; hence, information about the surface vibration becomes accessible. Simulations of images of the vibrating cantilever have also been performed. The results of the simulation are in good agreement with the experimental images. KW - FIB patterning KW - structured cantilever KW - AFM KW - modal analysis KW - DySEM Y1 - 2016 U6 - https://doi.org/10.1088/0960-1317/26/3/035010 SN - 0960-1317 SN - 1361-6439 VL - 26 PB - IOP Publ. Ltd. CY - Bristol ER - TY - THES A1 - Pirhayati, Mohammad T1 - Edge operators and boundary value problems Y1 - 2016 ER - TY - JOUR A1 - Tarkhanov, Nikolai Nikolaevich T1 - Deformation quantization and boundary value problems JF - International journal of geometric methods in modern physics : differential geometry, algebraic geometry, global analysis & topology N2 - We describe a natural construction of deformation quantization on a compact symplectic manifold with boundary. On the algebra of quantum observables, a trace functional is defined which as usual annihilates the commutators. This gives rise to an index as the trace of the unity element. We formulate the index theorem as a conjecture and examine it for the classical harmonic oscillator. KW - Symplectic manifold KW - star product KW - trace KW - index Y1 - 2016 U6 - https://doi.org/10.1142/S0219887816500079 SN - 0219-8878 SN - 1793-6977 VL - 13 SP - 176 EP - 195 PB - World Scientific CY - Singapore ER - TY - THES A1 - Berner, Nadine T1 - Deciphering multiple changes in complex climate time series using Bayesian inference T1 - Bayes'sche Inferenz als diagnostischer Ansatz zur Untersuchung multipler Übergänge in komplexen Klimazeitreihen N2 - Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for the understanding of the observed system's intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, the generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions. First of all, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are combined into a proxy probability for a posterior distribution of multiple transitions.
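For orientation, the textbook single-change-point version of such an inference, a shift in central tendency and dispersion under Gaussian likelihoods with a flat prior over split points, can be sketched as follows; this is an illustration only, not the kernel-based algorithm of the thesis:

    # Posterior over the location of a single change in mean/variance,
    # using the profile Gaussian log-likelihood of each admissible split
    # and a flat prior. Textbook illustration, not the thesis algorithm.
    import numpy as np

    def changepoint_posterior(x, min_seg=5):
        n = len(x)
        logp = np.full(n, -np.inf)
        for k in range(min_seg, n - min_seg):
            left, right = x[:k], x[k:]
            logp[k] = (-0.5 * k * np.log(left.var() + 1e-12)
                       - 0.5 * (n - k) * np.log(right.var() + 1e-12))
        w = np.exp(logp - logp.max())
        return w / w.sum()

    x = np.concatenate([np.random.normal(0, 1, 100),
                        np.random.normal(2, 1, 100)])
    print(changepoint_posterior(x).argmax())  # most probable change location

The kernel-based extension described above slides such local inferences across the series and aggregates their weighted results.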
The detection approach is applied to environmental time series from the Nile river in Aswan and the weather station Tuscaloosa, Alabama, both comprising documented changes. The method's performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations. Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region of the Plio-Pleistocene period. A detailed inference unravels multiple transitions underlying the indirect climate observations, some of which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa at around two million years ago. N2 - Im Allgemeinen stellen punktuelle Veränderungen in Zeitreihen (change points) eine Heterogenität in den statistischen oder dynamischen Charakteristika der Observablen dar. Das Auffinden und die Beschreibung solcher Übergänge bieten grundlegende Informationen über das beobachtete System hinsichtlich seiner intrinsischen Entwicklung sowie potentieller externer Einflüsse. Eine präzise Detektion von Veränderungen ist daher für die verschiedensten Forschungsgebiete, wie die Umweltwissenschaften, die Bioinformatik und die Wirtschaftswissenschaften, von großem Interesse. Die primäre Zielsetzung der in der vorliegenden Doktorarbeit vorgestellten Detektionsmethode ist die Untersuchung von sowohl direkten als auch indirekten Klimaobservablen auf Veränderungen. Um die damit verbundene Vielzahl an möglichen natürlichen Prozessen zu beschreiben, werden im Rahmen einer Bayes'schen Inversion die generischen statistischen Merkmale Zentraltendenz und Dispersion verwendet. Im Gegensatz zu etablierten Bayes'schen Methoden zur Analyse von multiplen Übergängen, die im Rahmen von Partitionsmodellen hoher Dimensionalität formuliert sind und die Spezifikation von Priorverteilungen erfordern, wird in dieser Doktorarbeit ein generischer, Kernel-basierter Ansatz niedriger Dimensionalität mit minimal informativen Priorverteilungen vorgestellt. Zunächst wird ein lokaler Bayes'scher Inversionsansatz entwickelt, der robuste Rückschlüsse auf die Position und die generischen Charakteristika einer einzelnen Veränderung erlaubt. Durch die Analyse von synthetischen Zeitreihen, die dem Einfluss von Veränderungen unterschiedlicher Signifikanz, Datenverlust und Ausreißern unterliegen, wird die Leistungsfähigkeit, Konsistenz und Sensitivität der Inversionsmethode begründet. Um Zeitreihen auch auf multiple Veränderungen systematisch untersuchen zu können, wird die Methode der Bayes'schen Inversion zu einem Kernel-basierten Ansatz erweitert. Durch die Einführung grundlegender Kernel-Maße können die Kernel-Resultate zu einer gewichteten Wahrscheinlichkeit kombiniert werden, die als Proxy einer Posterior-Verteilung multipler Veränderungen dient. Der Detektionsalgorithmus wird auf reale Umweltmessreihen vom Nil-Fluss in Aswan und von der Wetterstation Tuscaloosa, Alabama, angewendet, die jeweils dokumentierte Veränderungen enthalten. Das Ergebnis dieser Analyse bestätigt den entwickelten Ansatz als eine leistungsstarke diagnostische Methode zur Detektion multipler Übergänge in Zeitreihen.
Abschließend wird der generische Kernel-basierte Bayes'sche Ansatz verwendet, um eine Reihe von komplexen terrigenen Staubdaten zu untersuchen, die als Klimaindikatoren der afrikanischen Region des Plio-Pleistozäns interpretiert werden. Eine detaillierte Untersuchung deutet auf multiple Veränderungen in den indirekten Klimaobservablen hin, von denen einige als gemeinsame Übergänge interpretiert werden. Diese gemeinsam auftretenden Ereignisse stimmen mit etablierten globalen Klimaereignissen überein. Insbesondere der gefundene Zwei-Stufen-Übergang, der mit der Ausbildung der modernen Walker-Zirkulation assoziiert wird, liefert einen wichtigen Beitrag zur aktuellen Diskussion über den Einfluss von paläoklimatischen Veränderungen auf die Umweltbedingungen im tropischen und subtropischen Afrika vor circa zwei Millionen Jahren. KW - kernel-based Bayesian inference KW - multi-change point detection KW - direct and indirect climate observations KW - Plio-Pleistocene KW - (sub-) tropical Africa KW - terrigenous dust KW - kernel-basierte Bayes'sche Inferenz KW - Detektion multipler Übergänge KW - direkte und indirekte Klimaobservablen KW - Plio-Pleistozän KW - (sub-) tropisches Afrika KW - terrigener Staub Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-100065 ER -